* [PATCH v1 00/73] Provide flow filter API and statistics
@ 2024-10-21 21:04 Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
` (76 more replies)
0 siblings, 77 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Updates provided by this patchset:
* Multiple TX and RX queues.
* Scatter and gather for TX and RX.
* RSS hash.
* RSS key update.
* RSS based on VLAN or 5-tuple.
* RSS using different combinations of fields: L3 only, L4 only or both, and
source only, destination only or both.
* Several RSS hash keys, one for each flow type.
* Default RSS operation with no hash key specification.
* VLAN filtering.
* RX VLAN stripping via raw decap.
* TX VLAN insertion via raw encap.
* Flow API.
* Multi-process support.
* Tunnel types: GTP.
* Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
verification.
* Support for multiple rte_flow groups.
* Encapsulation and decapsulation of GTP data.
* Packet modification: NAT, TTL decrement, DSCP tagging.
* Traffic mirroring.
* Jumbo frame support.
* Port and queue statistics.
* RMON statistics in extended stats.
* Flow metering, including meter policy API.
* Link state information.
* CAM and TCAM based matching.
* Exact match of 140 million flows and policies.
* Basic stats.
* Extended stats.
Danylo Vodopianov (36):
net/ntnic: add API for configuration NT flow dev
net/ntnic: add item UDP
net/ntnic: add action TCP
net/ntnic: add action VLAN
net/ntnic: add item SCTP
net/ntnic: add items IPv6 and ICMPv6
net/ntnic: add action modify field
net/ntnic: add items gtp and actions raw encap/decap
net/ntnic: add cat module
net/ntnic: add SLC LR module
net/ntnic: add PDB module
net/ntnic: add QSL module
net/ntnic: add KM module
net/ntnic: add hash API
net/ntnic: add TPE module
net/ntnic: add FLM module
net/ntnic: add flm rcp module
net/ntnic: add learn flow queue handling
net/ntnic: add match and action db attributes
net/ntnic: add statistics API
net/ntnic: add rpf module
net/ntnic: add statistics poll
net/ntnic: add flm stat interface
net/ntnic: add tsm module
net/ntnic: add xstats
net/ntnic: add flow statistics
net/ntnic: add scrub registers
net/ntnic: add flow aged APIs
net/ntnic: add aged API to the inline profile
net/ntnic: add info and configure flow API
net/ntnic: add aged flow event
net/ntnic: add thread termination
net/ntnic: add age documentation
net/ntnic: add meter API
net/ntnic: add meter module
net/ntnic: add meter documentation
Oleksandr Kolomeiets (17):
net/ntnic: add flow dump feature
net/ntnic: add flow flush
net/ntnic: sort FPGA registers alphanumerically
net/ntnic: add MOD CSU
net/ntnic: add MOD FLM
net/ntnic: add HFU module
net/ntnic: add IFR module
net/ntnic: add MAC Rx module
net/ntnic: add MAC Tx module
net/ntnic: add RPP LR module
net/ntnic: add MOD SLC LR
net/ntnic: add Tx CPY module
net/ntnic: add Tx INS module
net/ntnic: add Tx RPL module
net/ntnic: add STA module
net/ntnic: add TSM module
net/ntnic: update documentation
Serhii Iliushyk (20):
net/ntnic: add flow filter API
net/ntnic: add minimal create/destroy flow operations
net/ntnic: add internal flow create/destroy API
net/ntnic: add minimal NT flow inline profile
net/ntnic: add management API for NT flow profile
net/ntnic: add NT flow profile management implementation
net/ntnic: add create/destroy implementation for NT flows
net/ntnic: add infrastructure for flow actions and items
net/ntnic: add action queue
net/ntnic: add action mark
net/ntnic: add action jump
net/ntnic: add action drop
net/ntnic: add item eth
net/ntnic: add item IPv4
net/ntnic: add item ICMP
net/ntnic: add item port ID
net/ntnic: add item void
net/ntnic: add GMF (Generic MAC Feeder) module
net/ntnic: update alignment for virt queue structs
net/ntnic: enable RSS feature
doc/guides/nics/features/ntnic.ini | 32 +
doc/guides/nics/ntnic.rst | 49 +
doc/guides/rel_notes/release_24_11.rst | 16 +-
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 598 ++
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 +-
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 73 +
drivers/net/ntnic/include/flow_api.h | 138 +
drivers/net/ntnic/include/flow_api_engine.h | 314 +
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 248 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 4 +
drivers/net/ntnic/include/ntnic_stat.h | 265 +
drivers/net/ntnic/include/ntos_drv.h | 24 +
.../ntnic/include/stream_binary_flow_api.h | 67 +
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 +
drivers/net/ntnic/meson.build | 20 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 6 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 +
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 30 +
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 +
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 769 +++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 99 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
.../net/ntnic/nthw/flow_api/flow_id_table.c | 147 +
.../net/ntnic/nthw/flow_api/flow_id_table.h | 26 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1171 ++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 457 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 640 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 +++
.../flow_api/profile_inline/flm_age_queue.c | 166 +
.../flow_api/profile_inline/flm_age_queue.h | 42 +
.../flow_api/profile_inline/flm_evt_queue.c | 293 +
.../flow_api/profile_inline/flm_evt_queue.h | 55 +
.../flow_api/profile_inline/flm_lrn_queue.c | 70 +
.../flow_api/profile_inline/flm_lrn_queue.h | 25 +
.../profile_inline/flow_api_hw_db_inline.c | 2850 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 374 ++
.../profile_inline/flow_api_profile_inline.c | 5276 +++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 76 +
.../flow_api_profile_inline_config.h | 129 +
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 +
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 498 ++
.../supported/nthw_fpga_9563_055_049_0000.c | 3317 +++++++----
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 11 +-
.../nthw/supported/nthw_fpga_mod_str_map.c | 2 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 5 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 48 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 205 +
drivers/net/ntnic/ntnic_ethdev.c | 750 ++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 940 +++
drivers/net/ntnic/ntnic_mod_reg.c | 93 +
drivers/net/ntnic/ntnic_mod_reg.h | 233 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 +++
drivers/net/ntnic/ntutil/nt_util.h | 12 +
75 files changed, 23703 insertions(+), 1049 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 01/73] net/ntnic: add API for configuration NT flow dev
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 02/73] net/ntnic: add flow filter API Serhii Iliushyk
` (75 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
This API allows enabling a flow profile for the NT SmartNIC.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
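The core of the allocation logic in this patch is a per-resource-type byte-array bitmap (see flow_nic_set_bit and flow_nic_alloc_resource in the diff below): allocation does a first-fit scan stepping by the requested alignment. A simplified, standalone sketch of that scheme, with hypothetical names, to illustrate the idea outside the driver:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define RES_COUNT 64 /* hypothetical resource pool size */

static uint8_t alloc_bm[RES_COUNT / 8]; /* one bit per resource index */

static void bm_set(uint8_t *arr, size_t x)
{
	arr[x / 8] = (uint8_t)(arr[x / 8] | (uint8_t)(1u << (x % 8)));
}

static int bm_is_set(const uint8_t *arr, size_t x)
{
	return !!(arr[x / 8] & (uint8_t)(1u << (x % 8)));
}

/* First-fit search stepping by 'alignment', as flow_nic_alloc_resource does */
static int bm_alloc(uint8_t *arr, unsigned int count, uint32_t alignment)
{
	for (unsigned int i = 0; i < count; i += alignment) {
		if (!bm_is_set(arr, i)) {
			bm_set(arr, i);
			return (int)i;
		}
	}
	return -1; /* pool exhausted */
}
```

With alignment 2, two successive allocations return indices 0 and 2, leaving index 1 untouched, which is the behavior the driver relies on for resources that must start on aligned boundaries.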
---
drivers/net/ntnic/include/flow_api.h | 30 +++
drivers/net/ntnic/include/flow_api_engine.h | 5 +
drivers/net/ntnic/include/ntos_drv.h | 1 +
.../ntnic/include/stream_binary_flow_api.h | 9 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 221 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 22 ++
drivers/net/ntnic/ntnic_mod_reg.c | 5 +
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++
8 files changed, 307 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 984450afdc..c80906ec50 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -34,6 +34,8 @@ struct flow_eth_dev {
struct flow_nic_dev *ndev;
/* NIC port id */
uint8_t port;
+ /* App assigned port_id - may be DPDK port_id */
+ uint32_t port_id;
/* 0th for exception */
struct flow_queue_id_s rx_queue[FLOW_MAX_QUEUES + 1];
@@ -41,6 +43,9 @@ struct flow_eth_dev {
/* VSWITCH has exceptions sent on queue 0 per design */
int num_queues;
+ /* QSL_HSH index if RSS needed QSL v6+ */
+ int rss_target_id;
+
struct flow_eth_dev *next;
};
@@ -48,6 +53,8 @@ struct flow_eth_dev {
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
uint16_t ports; /* number of in-ports addressable on this NIC */
+ /* flow profile this NIC is initially prepared for */
+ enum flow_eth_dev_profile flow_profile;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
@@ -73,6 +80,14 @@ struct flow_nic_dev {
extern const char *dbg_res_descr[];
+#define flow_nic_set_bit(arr, x) \
+ do { \
+ uint8_t *_temp_arr = (arr); \
+ size_t _temp_x = (x); \
+ _temp_arr[_temp_x / 8] = \
+ (uint8_t)(_temp_arr[_temp_x / 8] | (uint8_t)(1 << (_temp_x % 8))); \
+ } while (0)
+
#define flow_nic_unset_bit(arr, x) \
do { \
size_t _temp_x = (x); \
@@ -85,6 +100,18 @@ extern const char *dbg_res_descr[];
(arr[_temp_x / 8] & (uint8_t)(1 << (_temp_x % 8))); \
})
+#define flow_nic_mark_resource_used(_ndev, res_type, index) \
+ do { \
+ struct flow_nic_dev *_temp_ndev = (_ndev); \
+ typeof(res_type) _temp_res_type = (res_type); \
+ size_t _temp_index = (index); \
+ NT_LOG(DBG, FILTER, "mark resource used: %s idx %zu", \
+ dbg_res_descr[_temp_res_type], _temp_index); \
+ assert(flow_nic_is_bit_set(_temp_ndev->res[_temp_res_type].alloc_bm, \
+ _temp_index) == 0); \
+ flow_nic_set_bit(_temp_ndev->res[_temp_res_type].alloc_bm, _temp_index); \
+ } while (0)
+
#define flow_nic_mark_resource_unused(_ndev, res_type, index) \
do { \
typeof(res_type) _temp_res_type = (res_type); \
@@ -97,6 +124,9 @@ extern const char *dbg_res_descr[];
#define flow_nic_is_resource_used(_ndev, res_type, index) \
(!!flow_nic_is_bit_set((_ndev)->res[res_type].alloc_bm, index))
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment);
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index db5e6fe09d..d025677e25 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -41,6 +41,11 @@ enum res_type_e {
RES_INVALID
};
+/*
+ * Flow NIC offload management
+ */
+#define MAX_OUTPUT_DEST (128)
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index d51d1e3677..8fd577dfe3 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -86,6 +86,7 @@ struct __rte_cache_aligned ntnic_tx_queue {
struct pmd_internals {
const struct rte_pci_device *pci_dev;
+ struct flow_eth_dev *flw_dev;
char name[20];
int n_intf_no;
int lpbk_mode;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 10529b8843..47e5353344 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,11 +12,20 @@
#define FLOW_MAX_QUEUES 128
+/*
+ * Flow eth dev profile determines how the FPGA module resources are
+ * managed and what features are available
+ */
+enum flow_eth_dev_profile {
+ FLOW_ETH_DEV_PROFILE_INLINE = 0,
+};
+
struct flow_queue_id_s {
int id;
int hw_id;
};
struct flow_eth_dev; /* port device */
+struct flow_handle;
#endif /* _STREAM_BINARY_FLOW_API_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34e84559eb..f49aca79c1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_nic_setup.h"
#include "ntnic_mod_reg.h"
+#include "flow_api.h"
#include "flow_filter.h"
const char *dbg_res_descr[] = {
@@ -35,6 +36,24 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Resources
+ */
+
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment)
+{
+ for (unsigned int i = 0; i < ndev->res[res_type].resource_count; i += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, i)) {
+ flow_nic_mark_resource_used(ndev, res_type, i);
+ ndev->res[res_type].ref[i] = 1;
+ return i;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
@@ -55,10 +74,60 @@ int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return !!ndev->res[res_type].ref[index];/* if 0 resource has been freed */
}
+/*
+ * Nic port/adapter lookup
+ */
+
+static struct flow_eth_dev *nic_and_port_to_eth_dev(uint8_t adapter_no, uint8_t port)
+{
+ struct flow_nic_dev *nic_dev = dev_base;
+
+ while (nic_dev) {
+ if (nic_dev->adapter_no == adapter_no)
+ break;
+
+ nic_dev = nic_dev->next;
+ }
+
+ if (!nic_dev)
+ return NULL;
+
+ struct flow_eth_dev *dev = nic_dev->eth_base;
+
+ while (dev) {
+ if (port == dev->port)
+ return dev;
+
+ dev = dev->next;
+ }
+
+ return NULL;
+}
+
+static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
+{
+ struct flow_nic_dev *ndev = dev_base;
+
+ while (ndev) {
+ if (adapter_no == ndev->adapter_no)
+ break;
+
+ ndev = ndev->next;
+ }
+
+ return ndev;
+}
+
/*
* Device Management API
*/
+static void nic_insert_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *dev)
+{
+ dev->next = ndev->eth_base;
+ ndev->eth_base = dev;
+}
+
static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *eth_dev)
{
struct flow_eth_dev *dev = ndev->eth_base, *prev = NULL;
@@ -242,6 +311,154 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
return -1;
}
+/*
+ * adapter_no physical adapter no
+ * port_no local port no
+ * alloc_rx_queues number of rx-queues to allocate for this eth_dev
+ */
+static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no, uint32_t port_id,
+ int alloc_rx_queues, struct flow_queue_id_s queue_ids[],
+ int *rss_target_id, enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+
+ int i;
+ struct flow_eth_dev *eth_dev = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "Get eth-port adapter %i, port %i, port_id %u, rx queues %i, profile %i",
+ adapter_no, port_no, port_id, alloc_rx_queues, flow_profile);
+
+ if (MAX_OUTPUT_DEST < FLOW_MAX_QUEUES) {
+ assert(0);
+ NT_LOG(ERR, FILTER,
+ "ERROR: Internal array for multiple queues too small for API");
+ }
+
+ pthread_mutex_lock(&base_mtx);
+ struct flow_nic_dev *ndev = get_nic_dev_from_adapter_no(adapter_no);
+
+ if (!ndev) {
+ /* Error - no flow api found on specified adapter */
+ NT_LOG(ERR, FILTER, "ERROR: no flow interface registered for adapter %d",
+ adapter_no);
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if (ndev->ports < ((uint16_t)port_no + 1)) {
+ NT_LOG(ERR, FILTER, "ERROR: port exceeds supported port range for adapter");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if ((alloc_rx_queues - 1) > FLOW_MAX_QUEUES) { /* 0th is exception so +1 */
+ NT_LOG(ERR, FILTER,
+ "ERROR: Exceeds supported number of rx queues per eth device");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ /* don't accept multiple eth_dev's on same NIC and same port */
+ eth_dev = nic_and_port_to_eth_dev(adapter_no, port_no);
+
+ if (eth_dev) {
+ NT_LOG(DBG, FILTER, "Re-opening existing NIC port device: NIC DEV: %i Port %i",
+ adapter_no, port_no);
+ pthread_mutex_unlock(&base_mtx);
+ flow_delete_eth_dev(eth_dev);
+ eth_dev = NULL;
+ }
+
+ eth_dev = calloc(1, sizeof(struct flow_eth_dev));
+
+ if (!eth_dev) {
+ NT_LOG(ERR, FILTER, "ERROR: calloc failed");
+ goto err_exit1;
+ }
+
+ pthread_mutex_lock(&ndev->mtx);
+
+ eth_dev->ndev = ndev;
+ eth_dev->port = port_no;
+ eth_dev->port_id = port_id;
+
+ /* Allocate the requested queues in HW for this dev */
+
+ for (i = 0; i < alloc_rx_queues; i++) {
+#ifdef SCATTER_GATHER
+ eth_dev->rx_queue[i] = queue_ids[i];
+#else
+ int queue_id = flow_nic_alloc_resource(ndev, RES_QUEUE, 1);
+
+ if (queue_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: no more free queue IDs in NIC");
+ goto err_exit0;
+ }
+
+ eth_dev->rx_queue[eth_dev->num_queues].id = (uint8_t)queue_id;
+ eth_dev->rx_queue[eth_dev->num_queues].hw_id =
+ ndev->be.iface->alloc_rx_queue(ndev->be.be_dev,
+ eth_dev->rx_queue[eth_dev->num_queues].id);
+
+ if (eth_dev->rx_queue[eth_dev->num_queues].hw_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: could not allocate a new queue");
+ goto err_exit0;
+ }
+
+ if (queue_ids)
+ queue_ids[eth_dev->num_queues] = eth_dev->rx_queue[eth_dev->num_queues];
+#endif
+
+ if (i == 0 && (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE && exception_path)) {
+ /*
+ * Init QSL UNM - unmatched - redirects otherwise discarded
+ * packets in QSL
+ */
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_DEST_QUEUE, eth_dev->port,
+ eth_dev->rx_queue[0].hw_id) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1) < 0)
+ goto err_exit0;
+ }
+
+ eth_dev->num_queues++;
+ }
+
+ eth_dev->rss_target_id = -1;
+
+ *rss_target_id = eth_dev->rss_target_id;
+
+ nic_insert_eth_port_dev(ndev, eth_dev);
+
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+ return eth_dev;
+
+err_exit0:
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+
+err_exit1:
+ if (eth_dev)
+ free(eth_dev);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ NT_LOG(DBG, FILTER, "ERR in %s", __func__);
+ return NULL; /* Error exit */
+}
+
struct flow_nic_dev *flow_api_create(uint8_t adapter_no, const struct flow_api_backend_ops *be_if,
void *be_dev)
{
@@ -383,6 +600,10 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
+ /*
+ * Device Management API
+ */
+ .flow_get_eth_dev = flow_get_eth_dev,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bff893ec7a..510c0e5d23 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1355,6 +1355,13 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1378,10 +1385,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
uint32_t n_port_mask = -1; /* All ports enabled by default */
uint32_t nb_rx_queues = 1;
uint32_t nb_tx_queues = 1;
+ uint32_t exception_path = 0;
struct flow_queue_id_s queue_ids[MAX_QUEUES];
int n_phy_ports;
struct port_link_speed pls_mbps[NUM_ADAPTER_PORTS_MAX] = { 0 };
int num_port_speeds = 0;
+ enum flow_eth_dev_profile profile = FLOW_ETH_DEV_PROFILE_INLINE;
+
NT_LOG_DBGX(DBG, NTNIC, "Dev %s PF #%i Init : %02x:%02x:%i", pci_dev->name,
pci_dev->addr.function, pci_dev->addr.bus, pci_dev->addr.devid,
pci_dev->addr.function);
@@ -1681,6 +1691,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (flow_filter_ops != NULL) {
+ internals->flw_dev = flow_filter_ops->flow_get_eth_dev(0, n_intf_no,
+ eth_dev->data->port_id, nb_rx_queues, queue_ids,
+ &internals->txq_scg[0].rss_target_id, profile, exception_path);
+
+ if (!internals->flw_dev) {
+ NT_LOG(ERR, NTNIC,
+ "Error creating port. Resource exhaustion in HW");
+ return -1;
+ }
+ }
+
/* connect structs */
internals->p_drv = p_drv;
eth_dev->data->dev_private = internals;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index a03c97801b..ac8afdef6a 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,6 +118,11 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+const struct profile_inline_ops *get_profile_inline_ops(void)
+{
+ return NULL;
+}
+
static const struct flow_filter_ops *flow_filter_ops;
void register_flow_filter_ops(const struct flow_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 5b97b3d8ac..017d15d7bc 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include "flow_api.h"
+#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
#include "nthw_platform_drv.h"
#include "nthw_drv.h"
@@ -223,10 +224,23 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+const struct profile_inline_ops *get_profile_inline_ops(void);
+
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
int adapter_no);
int (*flow_filter_done)(struct flow_nic_dev *dev);
+ /*
+ * Device Management API
+ */
+ struct flow_eth_dev *(*flow_get_eth_dev)(uint8_t adapter_no,
+ uint8_t hw_port_no,
+ uint32_t port_id,
+ int alloc_rx_queues,
+ struct flow_queue_id_s queue_ids[],
+ int *rss_target_id,
+ enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path);
};
void register_flow_filter_ops(const struct flow_filter_ops *ops);
--
2.45.0
* [PATCH v1 02/73] net/ntnic: add flow filter API
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 03/73] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
` (74 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Enable the flow ops getter.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
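This patch wires rte_flow ops into the driver through a small register/get pair (register_dev_flow_ops / get_dev_flow_ops in the diff below), where the getter lazily triggers registration if none has happened yet. A standalone sketch of that registration pattern, with hypothetical names and a stand-in ops struct:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct rte_flow_ops */
struct flow_ops {
	int (*create)(void);
	int (*destroy)(void);
};

static const struct flow_ops *registered_ops;

static void register_ops(const struct flow_ops *ops)
{
	registered_ops = ops;
}

static int stub_create(void) { return 0; }
static int stub_destroy(void) { return 0; }

static const struct flow_ops default_ops = {
	.create = stub_create,
	.destroy = stub_destroy,
};

/* Module init: registers its ops table, as dev_flow_init() does */
static void ops_init(void)
{
	register_ops(&default_ops);
}

/* Getter with lazy init, mirroring get_dev_flow_ops() */
static const struct flow_ops *get_ops(void)
{
	if (registered_ops == NULL)
		ops_init();
	return registered_ops;
}
```

The lazy init keeps callers (here, the ethdev .flow_ops_get callback) independent of module initialization order.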
---
drivers/net/ntnic/include/create_elements.h | 13 ++++++
.../ntnic/include/stream_binary_flow_api.h | 2 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 7 +++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 46 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 15 ++++++
drivers/net/ntnic/ntnic_mod_reg.h | 5 ++
7 files changed, 89 insertions(+)
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
new file mode 100644
index 0000000000..802e6dcbe1
--- /dev/null
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -0,0 +1,13 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __CREATE_ELEMENTS_H__
+#define __CREATE_ELEMENTS_H__
+
+
+#include "stream_binary_flow_api.h"
+#include <rte_flow.h>
+
+#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 47e5353344..a6244d4082 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,8 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include "rte_flow.h"
+#include "rte_flow_driver.h"
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 3d9566a52e..d272c73c62 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -79,6 +79,7 @@ sources = files(
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
'ntlog/ntlog.c',
+ 'ntnic_filter/ntnic_filter.c',
'ntutil/nt_util.c',
'ntnic_mod_reg.c',
'ntnic_vfio.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 510c0e5d23..a509a8eb51 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1321,6 +1321,12 @@ eth_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, size_t fw_size
}
}
+static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct rte_flow_ops **ops)
+{
+ *ops = get_dev_flow_ops();
+ return 0;
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1349,6 +1355,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
};
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
new file mode 100644
index 0000000000..99eb993a4b
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -0,0 +1,46 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_flow_driver.h>
+#include "ntnic_mod_reg.h"
+
+static int
+eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_flow_error *error)
+{
+ (void)eth_dev;
+ (void)flow;
+ (void)error;
+
+ int res = 0;
+
+ return res;
+}
+
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ (void)eth_dev;
+ (void)attr;
+ (void)items;
+ (void)actions;
+ (void)error;
+
+ struct rte_flow *flow = NULL;
+
+ return flow;
+}
+
+static const struct rte_flow_ops dev_flow_ops = {
+ .create = eth_flow_create,
+ .destroy = eth_flow_destroy,
+};
+
+void dev_flow_init(void)
+{
+ register_dev_flow_ops(&dev_flow_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ac8afdef6a..ad2266116f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -137,3 +137,18 @@ const struct flow_filter_ops *get_flow_filter_ops(void)
return flow_filter_ops;
}
+
+static const struct rte_flow_ops *dev_flow_ops;
+
+void register_dev_flow_ops(const struct rte_flow_ops *ops)
+{
+ dev_flow_ops = ops;
+}
+
+const struct rte_flow_ops *get_dev_flow_ops(void)
+{
+ if (dev_flow_ops == NULL)
+ dev_flow_init();
+
+ return dev_flow_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 017d15d7bc..457dc58794 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -15,6 +15,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nthw_fpga_rst_nt200a0x.h"
#include "ntnic_virt_queue.h"
+#include "create_elements.h"
/* sg ops section */
struct sg_ops_s {
@@ -243,6 +244,10 @@ struct flow_filter_ops {
uint32_t exception_path);
};
+void register_dev_flow_ops(const struct rte_flow_ops *ops);
+const struct rte_flow_ops *get_dev_flow_ops(void);
+void dev_flow_init(void);
+
void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
--
2.45.0
* [PATCH v1 03/73] net/ntnic: add minimal create/destroy flow operations
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 02/73] net/ntnic: add flow filter API Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 04/73] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
` (73 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add a high-level API that describes the base create/destroy implementation
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/create_elements.h | 51 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 223 +++++++++++++++++-
drivers/net/ntnic/ntutil/nt_util.h | 3 +
3 files changed, 270 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 802e6dcbe1..179542d2b2 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -6,8 +6,59 @@
#ifndef __CREATE_ELEMENTS_H__
#define __CREATE_ELEMENTS_H__
+#include <stdint.h>
+#include <limits.h>
#include "stream_binary_flow_api.h"
#include <rte_flow.h>
+#define MAX_ELEMENTS 64
+#define MAX_ACTIONS 32
+
+struct cnv_match_s {
+ struct rte_flow_item rte_flow_item[MAX_ELEMENTS];
+};
+
+struct cnv_attr_s {
+ struct cnv_match_s match;
+ struct rte_flow_attr attr;
+ uint16_t forced_vlan_vid;
+ uint16_t caller_id;
+};
+
+struct cnv_action_s {
+ struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_queue queue;
+};
+
+/*
+ * Only needed because it eases the use of statistics through NTAPI,
+ * allowing faster integration into the NTAPI version of the driver.
+ * Therefore, this is only a good idea when running on a temporary NTAPI.
+ * The query() functionality must move to the flow engine once moved to the
+ * Open Source driver.
+ */
+
+struct rte_flow {
+ void *flw_hdl;
+ int used;
+
+ uint32_t flow_stat_id;
+
+ uint16_t caller_id;
+};
+
+enum nt_rte_flow_item_type {
+ NT_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
+ NT_RTE_FLOW_ITEM_TYPE_TUNNEL,
+};
+
+extern rte_spinlock_t flow_lock;
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem);
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset);
+
#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 99eb993a4b..816ab0cd5c 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -4,16 +4,191 @@
*/
#include <rte_flow_driver.h>
+#include "nt_util.h"
+#include "create_elements.h"
#include "ntnic_mod_reg.h"
+#include "ntos_system.h"
+
+#define MAX_RTE_FLOWS 8192
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
+static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
+{
+ if (error) {
+ error->cause = NULL;
+ error->message = rte_flow_error->message;
+
+ if (rte_flow_error->type == RTE_FLOW_ERROR_TYPE_NONE)
+ error->type = RTE_FLOW_ERROR_TYPE_NONE;
+
+ else
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+
+ return 0;
+}
+
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr)
+{
+ memset(&attribute->attr, 0x0, sizeof(struct rte_flow_attr));
+
+ if (attr) {
+ attribute->attr.group = attr->group;
+ attribute->attr.priority = attr->priority;
+ }
+
+ return 0;
+}
+
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem)
+{
+ int eidx = 0;
+ int iter_idx = 0;
+ int type = -1;
+
+ if (!items) {
+ NT_LOG(ERR, FILTER, "ERROR no items to iterate!");
+ return -1;
+ }
+
+ do {
+ type = items[iter_idx].type;
+
+ if (type < 0) {
+ if ((int)items[iter_idx].type == NT_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ type = NT_RTE_FLOW_ITEM_TYPE_TUNNEL;
+
+ } else {
+ NT_LOG(ERR, FILTER, "ERROR unknown item type received!");
+ return -1;
+ }
+ }
+
+ if (type >= 0) {
+ if (items[iter_idx].last) {
+ /* Ranges are not supported yet */
+ NT_LOG(ERR, FILTER, "ERROR ITEM-RANGE SETUP - NOT SUPPORTED!");
+ return -1;
+ }
+
+ if (eidx == max_elem) {
+ NT_LOG(ERR, FILTER, "ERROR TOO MANY ELEMENTS ENCOUNTERED!");
+ return -1;
+ }
+
+ match->rte_flow_item[eidx].type = type;
+ match->rte_flow_item[eidx].spec = items[iter_idx].spec;
+ match->rte_flow_item[eidx].mask = items[iter_idx].mask;
+
+ eidx++;
+ iter_idx++;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
+ return (type >= 0) ? 0 : -1;
+}
+
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset)
+{
+ (void)action;
+ (void)actions;
+ (void)max_elem;
+ (void)queue_offset;
+ int type = -1;
+
+ return (type >= 0) ? 0 : -1;
+}
+
+static inline uint16_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + port + 1;
+}
+
+static int convert_flow(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct cnv_attr_s *attribute,
+ struct cnv_match_s *match,
+ struct cnv_action_s *action,
+ struct rte_flow_error *error)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t queue_offset = 0;
+
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!internals) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Missing eth_dev");
+ return -1;
+ }
+
+ /* Dereference internals only after the NULL check above */
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0) {
+ /*
+ * The queues coming from the main PMD will always start from 0
+ * When the port is a VF/vDPA port, the queues must be changed
+ * to match the queues allocated for VF/vDPA.
+ */
+ queue_offset = internals->vpq[0].id;
+ }
+
+ if (create_attr(attribute, attr) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL, "Error in attr");
+ return -1;
+ }
+
+ if (create_match_elements(match, items, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in items");
+ return -1;
+ }
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ if (create_action_elements_inline(action, actions,
+ MAX_ACTIONS, queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return -1;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return -1;
+ }
+
+ return 0;
+}
static int
eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_flow_error *error)
{
(void)eth_dev;
- (void)flow;
- (void)error;
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!flow)
+ return 0;
return res;
}
@@ -24,13 +199,47 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
- (void)eth_dev;
- (void)attr;
- (void)items;
- (void)actions;
- (void)error;
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ struct cnv_attr_s attribute = { 0 };
+ struct cnv_match_s match = { 0 };
+ struct cnv_action_s action = { 0 };
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t flow_stat_id = 0;
+
+ if (convert_flow(eth_dev, attr, items, actions, &attribute, &match, &action, error) < 0)
+ return NULL;
+
+ /* Main application caller_id is port_id shifted above VF ports */
+ attribute.caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ convert_error(error, &flow_error);
+ return (struct rte_flow *)NULL;
+ }
struct rte_flow *flow = NULL;
+ rte_spinlock_lock(&flow_lock);
+ int i;
+
+ for (i = 0; i < MAX_RTE_FLOWS; i++) {
+ if (!nt_flows[i].used) {
+ nt_flows[i].flow_stat_id = flow_stat_id;
+
+ if (nt_flows[i].flow_stat_id < NT_MAX_COLOR_FLOW_STATS) {
+ nt_flows[i].used = 1;
+ flow = &nt_flows[i];
+ }
+
+ break;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
return flow;
}
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 64947f5fbf..71ecd6c68c 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -9,6 +9,9 @@
#include <stdint.h>
#include "nt4ga_link.h"
+/* Total max VDPA ports */
+#define MAX_VDPA_PORTS 128UL
+
#ifndef ARRAY_SIZE
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
--
2.45.0
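The core of `create_match_elements()` above is a scan over an `RTE_FLOW_ITEM_TYPE_END`-terminated item array into a fixed-size buffer with an overflow check. A reduced sketch of that loop shape, under hypothetical `item`/`copy_items` names (not the driver's API):

```c
#include <assert.h>

enum item_type { ITEM_END = 0, ITEM_ETH, ITEM_IPV4 };

struct item { enum item_type type; };

/*
 * Copy items (including the terminating ITEM_END) into a fixed array.
 * Returns the number of elements copied, or -1 when max_elem would be
 * exceeded -- mirroring the MAX_ELEMENTS check in create_match_elements().
 */
static int copy_items(struct item *dst, const struct item *src, int max_elem)
{
	int eidx = 0;

	for (;; src++) {
		if (eidx == max_elem)
			return -1;	/* too many elements encountered */

		dst[eidx++] = *src;

		if (src->type == ITEM_END)
			return eidx;
	}
}
```

Note that the terminator itself consumes a slot, so a caller sizing the destination must budget one extra element beyond the longest expected match.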
* [PATCH v1 04/73] net/ntnic: add internal flow create/destroy API
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (2 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 03/73] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 05/73] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
` (72 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add an NT-specific flow filter API for flow create/destroy
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 49 +++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 63 ++++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 14 +++++
3 files changed, 124 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index f49aca79c1..776c8e4407 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -117,6 +117,50 @@ static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
return ndev;
}
+/*
+ * Flow API
+ */
+
+static struct flow_handle *flow_create(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item item[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)attr;
+ (void)forced_vlan_vid;
+ (void)caller_id;
+ (void)item;
+ (void)action;
+ (void)error;
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return NULL;
+ }
+
+ return NULL;
+}
+
+static int flow_destroy(struct flow_eth_dev *dev, struct flow_handle *flow,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)flow;
+ (void)error;
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return -1;
+}
/*
* Device Management API
@@ -604,6 +648,11 @@ static const struct flow_filter_ops ops = {
* Device Management API
*/
.flow_get_eth_dev = flow_get_eth_dev,
+ /*
+ * NT Flow API
+ */
+ .flow_create = flow_create,
+ .flow_destroy = flow_destroy,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 816ab0cd5c..83ca52a2ad 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -114,6 +114,13 @@ static inline uint16_t get_caller_id(uint16_t port)
return MAX_VDPA_PORTS + port + 1;
}
+static int is_flow_handle_typecast(struct rte_flow *flow)
+{
+ const void *first_element = &nt_flows[0];
+ const void *last_element = &nt_flows[MAX_RTE_FLOWS - 1];
+ return (void *)flow < first_element || (void *)flow > last_element;
+}
+
static int convert_flow(struct rte_eth_dev *eth_dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
@@ -179,7 +186,14 @@ static int convert_flow(struct rte_eth_dev *eth_dev,
static int
eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_flow_error *error)
{
- (void)eth_dev;
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
@@ -190,6 +204,20 @@ eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_
if (!flow)
return 0;
+ if (is_flow_handle_typecast(flow)) {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, (void *)flow, &flow_error);
+ convert_error(error, &flow_error);
+
+ } else {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, flow->flw_hdl,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ rte_spinlock_unlock(&flow_lock);
+ }
+
return res;
}
@@ -199,6 +227,13 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
@@ -218,8 +253,12 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
attribute.caller_id = get_caller_id(eth_dev->data->port_id);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ void *flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
convert_error(error, &flow_error);
- return (struct rte_flow *)NULL;
+ return (struct rte_flow *)flw_hdl;
}
struct rte_flow *flow = NULL;
@@ -241,6 +280,26 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
rte_spinlock_unlock(&flow_lock);
+ if (flow) {
+ flow->flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ if (!flow->flw_hdl) {
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ flow = NULL;
+ rte_spinlock_unlock(&flow_lock);
+
+ } else {
+ rte_spinlock_lock(&flow_lock);
+ flow->caller_id = attribute.caller_id;
+ rte_spinlock_unlock(&flow_lock);
+ }
+ }
+
return flow;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 457dc58794..ec8c1612d1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -242,6 +242,20 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ /*
+ * NT Flow API
+ */
+ struct flow_handle *(*flow_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item item[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
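`is_flow_handle_typecast()` in the patch above distinguishes the two kinds of handles returned by `eth_flow_create()` purely by address: handles inside the static `nt_flows[]` table are wrapper entries, anything else is a raw driver handle that was typecast. A self-contained sketch of the same address-range test (hypothetical names; note this relies on flat pointer comparison, as the driver's code does):

```c
#include <assert.h>
#include <stddef.h>

#define N_FLOWS 8

struct flow { int used; };

/* Static table standing in for nt_flows[] */
static struct flow flows[N_FLOWS];

/*
 * True when the pointer does NOT point into the static flow table,
 * i.e. it is a typecast driver handle (cf. is_flow_handle_typecast()).
 */
static int is_foreign_handle(const struct flow *f)
{
	const void *first = &flows[0];
	const void *last = &flows[N_FLOWS - 1];

	return (const void *)f < first || (const void *)f > last;
}
```

The design choice here avoids storing a discriminator in the handle itself: group-0 flows get a table slot, higher-group flows are passed through as opaque pointers, and destroy can still tell them apart.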
* [PATCH v1 05/73] net/ntnic: add minimal NT flow inline profile
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (3 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 04/73] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 06/73] net/ntnic: add management API for NT flow profile Serhii Iliushyk
` (71 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
The flow profile implements all flow-related operations
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 15 +++++
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 28 +++++++-
.../profile_inline/flow_api_profile_inline.c | 65 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 33 ++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 12 +++-
drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++++
7 files changed, 174 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index c80906ec50..3bdfdd4f94 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -74,6 +74,21 @@ struct flow_nic_dev {
struct flow_nic_dev *next;
};
+enum flow_nic_err_msg_e {
+ ERR_SUCCESS = 0,
+ ERR_FAILED = 1,
+ ERR_OUTPUT_TOO_MANY = 3,
+ ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_ACTION_UNSUPPORTED = 28,
+ ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_OUTPUT_INVALID = 33,
+ ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_MSG_NO_MSG
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error);
+
/*
* Resources
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d272c73c62..f5605e81cb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 776c8e4407..75825eb40c 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -36,6 +36,29 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Error handling
+ */
+
+static const struct {
+ const char *message;
+} err_msg[] = {
+ /* 00 */ { "Operation successfully completed" },
+ /* 01 */ { "Operation failed" },
+ /* 29 */ { "Removing flow failed" },
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error)
+{
+ assert(msg < ERR_MSG_NO_MSG);
+
+ if (error) {
+ error->message = err_msg[msg].message;
+ error->type = (msg == ERR_SUCCESS) ? RTE_FLOW_ERROR_TYPE_NONE :
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+}
+
/*
* Resources
*/
@@ -143,7 +166,8 @@ static struct flow_handle *flow_create(struct flow_eth_dev *dev,
return NULL;
}
- return NULL;
+ return profile_inline_ops->flow_create_profile_inline(dev, attr,
+ forced_vlan_vid, caller_id, item, action, error);
}
static int flow_destroy(struct flow_eth_dev *dev, struct flow_handle *flow,
@@ -159,7 +183,7 @@ static int flow_destroy(struct flow_eth_dev *dev, struct flow_handle *flow,
return -1;
}
- return -1;
+ return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
new file mode 100644
index 0000000000..a6293f5f82
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -0,0 +1,65 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "flow_api_profile_inline.h"
+#include "ntnic_mod_reg.h"
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ return NULL;
+}
+
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error)
+{
+ assert(dev);
+ assert(fh);
+
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ return err;
+}
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *flow,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (flow) {
+ /* Delete this flow */
+ pthread_mutex_lock(&dev->ndev->mtx);
+ err = flow_destroy_locked_profile_inline(dev, flow, error);
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ }
+
+ return err;
+}
+
+static const struct profile_inline_ops ops = {
+ /*
+ * Flow functionality
+ */
+ .flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
+ .flow_create_profile_inline = flow_create_profile_inline,
+ .flow_destroy_profile_inline = flow_destroy_profile_inline,
+};
+
+void profile_inline_init(void)
+{
+ register_profile_inline_ops(&ops);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
new file mode 100644
index 0000000000..a83cc299b4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -0,0 +1,33 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_H_
+#define _FLOW_API_PROFILE_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+#include "stream_binary_flow_api.h"
+
+/*
+ * Flow functionality
+ */
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+
+#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ad2266116f..593b56bf5b 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,9 +118,19 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+static const struct profile_inline_ops *profile_inline_ops;
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops)
+{
+ profile_inline_ops = ops;
+}
+
const struct profile_inline_ops *get_profile_inline_ops(void)
{
- return NULL;
+ if (profile_inline_ops == NULL)
+ profile_inline_init();
+
+ return profile_inline_ops;
}
static const struct flow_filter_ops *flow_filter_ops;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index ec8c1612d1..d133336fad 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -225,7 +225,30 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+struct profile_inline_ops {
+ /*
+ * Flow functionality
+ */
+ int (*flow_destroy_locked_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+ struct flow_handle *(*flow_create_profile_inline)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+};
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops);
const struct profile_inline_ops *get_profile_inline_ops(void);
+void profile_inline_init(void);
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
--
2.45.0
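`flow_nic_set_error()` above maps an error enum to a message table indexed by the enum value. A minimal sketch of that lookup with an explicit bounds fallback (hypothetical `err_e`/`err_to_msg` names). One caveat worth noting: the sketch uses a dense enum, whereas the driver's `flow_nic_err_msg_e` values are sparse (e.g. `ERR_REMOVE_FLOW_FAILED = 29`), so the real table must provide an entry for every index up to `ERR_MSG_NO_MSG`:

```c
#include <assert.h>
#include <string.h>

enum err_e { ERR_OK = 0, ERR_FAILED = 1, ERR_REMOVE = 2, ERR_NO_MSG };

/* Designated initializers keep message and code visibly paired */
static const char *const err_msg[] = {
	[ERR_OK]     = "Operation successfully completed",
	[ERR_FAILED] = "Operation failed",
	[ERR_REMOVE] = "Removing flow failed",
};

/* Map an error code to its message; out-of-range codes fall back */
static const char *err_to_msg(enum err_e e)
{
	if (e >= ERR_NO_MSG || err_msg[e] == NULL)
		return "Unknown error";

	return err_msg[e];
}
```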
* [PATCH v1 06/73] net/ntnic: add management API for NT flow profile
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (4 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 05/73] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
` (70 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
The management API implements (re)setting of the NT flow dev
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 ++
drivers/net/ntnic/nthw/flow_api/flow_api.c | 60 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 20 +++++++
.../profile_inline/flow_api_profile_inline.h | 8 +++
drivers/net/ntnic/ntnic_mod_reg.h | 8 +++
6 files changed, 102 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 3bdfdd4f94..790b2f6b03 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -55,6 +55,7 @@ struct flow_nic_dev {
uint16_t ports; /* number of in-ports addressable on this NIC */
/* flow profile this NIC is initially prepared for */
enum flow_eth_dev_profile flow_profile;
+ int flow_mgnt_prepared;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index d025677e25..52ff3cb865 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -46,6 +46,11 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+struct flow_handle {
+ struct flow_eth_dev *dev;
+ struct flow_handle *next;
+};
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 75825eb40c..4139d42c8c 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -10,6 +10,8 @@
#include "flow_api.h"
#include "flow_filter.h"
+#define SCATTER_GATHER
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -220,10 +222,29 @@ static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_de
static void flow_ndev_reset(struct flow_nic_dev *ndev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return;
+ }
+
/* Delete all eth-port devices created on this NIC device */
while (ndev->eth_base)
flow_delete_eth_dev(ndev->eth_base);
+ /* Error check */
+ while (ndev->flow_base) {
+ NT_LOG(ERR, FILTER,
+ "ERROR : Flows still defined but all eth-ports deleted. Flow %p",
+ ndev->flow_base);
+
+ profile_inline_ops->flow_destroy_profile_inline(ndev->flow_base->dev,
+ ndev->flow_base, NULL);
+ }
+
+ profile_inline_ops->done_flow_management_of_ndev_profile_inline(ndev);
+
km_free_ndev_resource_management(&ndev->km_res_handle);
kcc_free_ndev_resource_management(&ndev->kcc_res_handle);
@@ -265,6 +286,13 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
struct flow_nic_dev *ndev = eth_dev->ndev;
if (!ndev) {
@@ -281,6 +309,20 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
/* delete all created flows from this device */
pthread_mutex_lock(&ndev->mtx);
+ struct flow_handle *flow = ndev->flow_base;
+
+ while (flow) {
+ if (flow->dev == eth_dev) {
+ struct flow_handle *flow_next = flow->next;
+ profile_inline_ops->flow_destroy_locked_profile_inline(eth_dev, flow,
+ NULL);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
/*
* remove unmatched queue if setup in QSL
* remove exception queue setting in QSL UNM
@@ -455,6 +497,24 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->port = port_no;
eth_dev->port_id = port_id;
+ /* First time the NIC is initialized */
+ if (!ndev->flow_mgnt_prepared) {
+ ndev->flow_profile = flow_profile;
+
+ /* Initialize modules if needed - recipe 0 is used as no-match and must be setup */
+ if (profile_inline_ops != NULL &&
+ profile_inline_ops->initialize_flow_management_of_ndev_profile_inline(ndev))
+ goto err_exit0;
+
+ } else {
+ /* check if same flow type is requested, otherwise fail */
+ if (ndev->flow_profile != flow_profile) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: Different flow types requested on same NIC device. Not supported.");
+ goto err_exit0;
+ }
+ }
+
/* Allocate the requested queues in HW for this dev */
for (i = 0; i < alloc_rx_queues; i++) {
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a6293f5f82..c9e4008b7e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,20 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+/*
+ * Public functions
+ */
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return -1;
+}
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return 0;
+}
+
struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid,
@@ -51,6 +65,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
}
static const struct profile_inline_ops ops = {
+ /*
+ * Management
+ */
+ .done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
+ .initialize_flow_management_of_ndev_profile_inline =
+ initialize_flow_management_of_ndev_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index a83cc299b4..b87f8542ac 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -11,6 +11,14 @@
#include "flow_api.h"
#include "stream_binary_flow_api.h"
+/*
+ * Management
+ */
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index d133336fad..149c549112 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -226,6 +226,14 @@ const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
struct profile_inline_ops {
+ /*
+ * Management
+ */
+
+ int (*done_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
+ int (*initialize_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 07/73] net/ntnic: add NT flow profile management implementation
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (5 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 06/73] net/ntnic: add management API for NT flow profile Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
` (69 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Implement the functions required to (re)set an NT flow dev
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 4 ++
drivers/net/ntnic/include/flow_api_engine.h | 10 ++++
drivers/net/ntnic/meson.build | 4 ++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 55 +++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 52 ++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 19 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 59 +++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 23 ++++++++
.../profile_inline/flow_api_profile_inline.c | 52 ++++++++++++++++
9 files changed, 278 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 790b2f6b03..748da89262 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -61,6 +61,10 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *group_handle;
+ void *hw_db_handle;
+ void *id_table_handle;
+
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 52ff3cb865..2497c31a08 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -6,6 +6,8 @@
#ifndef _FLOW_API_ENGINE_H_
#define _FLOW_API_ENGINE_H_
+#include <stdint.h>
+
/*
* Resource management
*/
@@ -46,6 +48,9 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_CPY_WRITERS_SUPPORTED 8
+
+
struct flow_handle {
struct flow_eth_dev *dev;
struct flow_handle *next;
@@ -55,4 +60,9 @@ void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
+/*
+ * Group management
+ */
+int flow_group_handle_create(void **handle, uint32_t group_count);
+int flow_group_handle_destroy(void **handle);
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f5605e81cb..f7292144ac 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -18,6 +18,7 @@ includes = [
include_directories('nthw/supported'),
include_directories('nthw/model'),
include_directories('nthw/flow_filter'),
+ include_directories('nthw/flow_api'),
include_directories('nim/'),
]
@@ -47,7 +48,10 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/flow_group.c',
+ 'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
+ 'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
new file mode 100644
index 0000000000..a7371f3aad
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -0,0 +1,55 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "flow_api_engine.h"
+
+#define OWNER_ID_COUNT 256
+#define PORT_COUNT 8
+
+struct group_lookup_entry_s {
+ uint64_t ref_counter;
+ uint32_t *reverse_lookup;
+};
+
+struct group_handle_s {
+ uint32_t group_count;
+
+ uint32_t *translation_table;
+
+ struct group_lookup_entry_s *lookup_entries;
+};
+
+int flow_group_handle_create(void **handle, uint32_t group_count)
+{
+ struct group_handle_s *group_handle;
+
+ *handle = calloc(1, sizeof(struct group_handle_s));
+
+ if (*handle == NULL)
+ return -1;
+
+ group_handle = *handle;
+
+ group_handle->group_count = group_count;
+ group_handle->translation_table =
+ calloc((uint32_t)(group_count * PORT_COUNT * OWNER_ID_COUNT), sizeof(uint32_t));
+ group_handle->lookup_entries = calloc(group_count, sizeof(struct group_lookup_entry_s));
+
+ if (group_handle->translation_table == NULL || group_handle->lookup_entries == NULL) {
+ flow_group_handle_destroy(handle);
+ return -1;
+ }
+
+ return 0;
+}
+
+int flow_group_handle_destroy(void **handle)
+{
+ if (*handle) {
+ struct group_handle_s *group_handle = (struct group_handle_s *)*handle;
+
+ free(group_handle->translation_table);
+ free(group_handle->lookup_entries);
+
+ free(*handle);
+ *handle = NULL;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
new file mode 100644
index 0000000000..9b46848e59
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <pthread.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "flow_id_table.h"
+
+#define NTNIC_ARRAY_BITS 14
+#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+
+struct ntnic_id_table_element {
+ union flm_handles handle;
+ uint8_t caller_id;
+ uint8_t type;
+};
+
+struct ntnic_id_table_data {
+ struct ntnic_id_table_element *arrays[NTNIC_ARRAY_SIZE];
+ pthread_mutex_t mtx;
+
+ uint32_t next_id;
+
+ uint32_t free_head;
+ uint32_t free_tail;
+ uint32_t free_count;
+};
+
+void *ntnic_id_table_create(void)
+{
+ struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
+
+ if (handle == NULL)
+ return NULL;
+
+ pthread_mutex_init(&handle->mtx, NULL);
+ handle->next_id = 1;
+
+ return handle;
+}
+
+void ntnic_id_table_destroy(void *id_table)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ for (uint32_t i = 0; i < NTNIC_ARRAY_SIZE; ++i)
+ free(handle->arrays[i]);
+
+ pthread_mutex_destroy(&handle->mtx);
+
+ free(id_table);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
new file mode 100644
index 0000000000..13455f1165
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLOW_ID_TABLE_H_
+#define _FLOW_ID_TABLE_H_
+
+#include <stdint.h>
+
+union flm_handles {
+ uint64_t idx;
+ void *p;
+};
+
+void *ntnic_id_table_create(void);
+void ntnic_id_table_destroy(void *id_table);
+
+#endif /* _FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
new file mode 100644
index 0000000000..5fda11183c
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+
+#include "flow_api_hw_db_inline.h"
+
+/******************************************************************************/
+/* Handle */
+/******************************************************************************/
+
+struct hw_db_inline_resource_db {
+ /* Actions */
+ struct hw_db_inline_resource_db_cot {
+ struct hw_db_inline_cot_data data;
+ int ref;
+ } *cot;
+
+ uint32_t nb_cot;
+
+ /* Hardware */
+
+ struct hw_db_inline_resource_db_cfn {
+ uint64_t priority;
+ int cfn_hw;
+ int ref;
+ } *cfn;
+};
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
+{
+ /* Note: calloc is required, since hw_db_inline_destroy() unconditionally frees all members */
+ struct hw_db_inline_resource_db *db = calloc(1, sizeof(struct hw_db_inline_resource_db));
+
+ if (db == NULL)
+ return -1;
+
+ db->nb_cot = ndev->be.cat.nb_cat_funcs;
+ db->cot = calloc(db->nb_cot, sizeof(struct hw_db_inline_resource_db_cot));
+
+ if (db->cot == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ *db_handle = db;
+ return 0;
+}
+
+void hw_db_inline_destroy(void *db_handle)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ free(db->cot);
+
+ free(db->cfn);
+
+ free(db);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
new file mode 100644
index 0000000000..23caf73cf3
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_HW_DB_INLINE_H_
+#define _FLOW_API_HW_DB_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+
+struct hw_db_inline_cot_data {
+ uint32_t matcher_color_contrib : 4;
+ uint32_t frag_rcp : 4;
+ uint32_t padding : 24;
+};
+
+/**/
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
+void hw_db_inline_destroy(void *db_handle);
+
+#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index c9e4008b7e..986196b408 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,6 +4,9 @@
*/
#include "ntlog.h"
+#include "flow_api_engine.h"
+#include "flow_api_hw_db_inline.h"
+#include "flow_id_table.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
@@ -14,11 +17,60 @@
int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+ if (!ndev->flow_mgnt_prepared) {
+ /* Check static arrays are big enough */
+ assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+
+ ndev->id_table_handle = ntnic_id_table_create();
+
+ if (ndev->id_table_handle == NULL)
+ goto err_exit0;
+
+ if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
+ goto err_exit0;
+
+ if (hw_db_inline_create(ndev, &ndev->hw_db_handle))
+ goto err_exit0;
+
+ ndev->flow_mgnt_prepared = 1;
+ }
+
+ return 0;
+
+err_exit0:
+ done_flow_management_of_ndev_profile_inline(ndev);
return -1;
}
int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ if (ndev->flow_mgnt_prepared) {
+ flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+
+ flow_group_handle_destroy(&ndev->group_handle);
+ ntnic_id_table_destroy(ndev->id_table_handle);
+
+ flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+
+ hw_mod_tpe_reset(&ndev->be);
+ flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
+ flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+
+ hw_db_inline_destroy(ndev->hw_db_handle);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ ndev->flow_mgnt_prepared = 0;
+ }
+
return 0;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 08/73] net/ntnic: add create/destroy implementation for NT flows
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (6 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 09/73] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
` (68 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Implement flow create/destroy functions with minimal capabilities:
* item: any
* action: port_id
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 6 +
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 105 +++
.../ntnic/include/stream_binary_flow_api.h | 4 +
drivers/net/ntnic/meson.build | 2 +
drivers/net/ntnic/nthw/flow_api/flow_group.c | 44 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 79 ++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 4 +
.../flow_api/profile_inline/flm_lrn_queue.c | 28 +
.../flow_api/profile_inline/flm_lrn_queue.h | 14 +
.../profile_inline/flow_api_hw_db_inline.c | 92 +++
.../profile_inline/flow_api_hw_db_inline.h | 64 ++
.../profile_inline/flow_api_profile_inline.c | 684 ++++++++++++++++++
13 files changed, 1129 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b9b87bdfe..1c653fd5a0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,3 +12,9 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
Linux = Y
x86-64 = Y
+
+[rte_flow items]
+any = Y
+
+[rte_flow actions]
+port_id = Y
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 748da89262..667dad6d5f 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -68,6 +68,9 @@ struct flow_nic_dev {
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
+ /* linked list of all FLM flows created on this NIC */
+ struct flow_handle *flow_base_flm;
+ pthread_mutex_t flow_mtx;
/* NIC backend API */
struct flow_api_backend_s be;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 2497c31a08..b8da5eafba 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -7,6 +7,10 @@
#define _FLOW_API_ENGINE_H_
#include <stdint.h>
+#include <stdatomic.h>
+
+#include "hw_mod_backend.h"
+#include "stream_binary_flow_api.h"
/*
* Resource management
@@ -50,10 +54,107 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+enum flow_port_type_e {
+ PORT_NONE, /* not defined or drop */
+ PORT_INTERNAL, /* no queues attached */
+ PORT_PHY, /* MAC phy output queue */
+ PORT_VIRT, /* Memory queues to Host */
+};
+
+struct output_s {
+ uint32_t owning_port_id;/* the port that owns this output destination */
+ enum flow_port_type_e type;
+ int id; /* depending on port type: queue ID or physical port id or not used */
+ int active; /* activated */
+};
+
+struct nic_flow_def {
+ /*
+ * Frame Decoder match info collected
+ */
+ int l2_prot;
+ int l3_prot;
+ int l4_prot;
+ int tunnel_prot;
+ int tunnel_l3_prot;
+ int tunnel_l4_prot;
+ int vlans;
+ int fragmentation;
+ int ip_prot;
+ int tunnel_ip_prot;
+ /*
+ * Additional meta data for various functions
+ */
+ int in_port_override;
+ int non_empty; /* default value is -1; value 1 means flow actions update */
+ struct output_s dst_id[MAX_OUTPUT_DEST];/* define the output to use */
+ /* total number of available queues defined for all outputs - i.e. number of dst_id's */
+ int dst_num_avail;
+
+ /*
+ * Mark or Action info collection
+ */
+ uint32_t mark;
+
+ uint32_t jump_to_group;
+
+ int full_offload;
+};
+
+enum flow_handle_type {
+ FLOW_HANDLE_TYPE_FLOW,
+ FLOW_HANDLE_TYPE_FLM,
+};
struct flow_handle {
+ enum flow_handle_type type;
+ uint32_t flm_id;
+ uint16_t caller_id;
+ uint16_t learn_ignored;
+
struct flow_eth_dev *dev;
struct flow_handle *next;
+ struct flow_handle *prev;
+
+ void *user_data;
+
+ union {
+ struct {
+ /*
+ * 1st step conversion and validation of flow
+ * verified and converted flow match + actions structure
+ */
+ struct nic_flow_def *fd;
+ /*
+ * 2nd step NIC HW resource allocation and configuration
+ * NIC resource management structures
+ */
+ struct {
+ uint32_t db_idx_counter;
+ uint32_t db_idxs[RES_COUNT];
+ };
+ uint32_t port_id; /* MAC port ID or override of virtual in_port */
+ };
+
+ struct {
+ uint32_t flm_db_idx_counter;
+ uint32_t flm_db_idxs[RES_COUNT];
+
+ uint32_t flm_data[10];
+ uint8_t flm_prot;
+ uint8_t flm_kid;
+ uint8_t flm_prio;
+ uint8_t flm_ft;
+
+ uint16_t flm_rpl_ext_ptr;
+ uint32_t flm_nat_ipv4;
+ uint16_t flm_nat_port;
+ uint8_t flm_dscp;
+ uint32_t flm_teid;
+ uint8_t flm_rqi;
+ uint8_t flm_qfi;
+ };
+ };
};
void km_free_ndev_resource_management(void **handle);
@@ -65,4 +166,8 @@ void kcc_free_ndev_resource_management(void **handle);
*/
int flow_group_handle_create(void **handle, uint32_t group_count);
int flow_group_handle_destroy(void **handle);
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out);
+
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index a6244d4082..d878b848c2 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -8,6 +8,10 @@
#include "rte_flow.h"
#include "rte_flow_driver.h"
+
+/* Max RSS hash key length in bytes */
+#define MAX_RSS_KEY_LEN 40
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f7292144ac..e1fef37ccb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -50,6 +50,8 @@ sources = files(
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
+ 'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
index a7371f3aad..f76986b178 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_group.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -53,3 +53,47 @@ int flow_group_handle_destroy(void **handle)
return 0;
}
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out)
+{
+ struct group_handle_s *group_handle = (struct group_handle_s *)handle;
+ uint32_t *table_ptr;
+ uint32_t lookup;
+
+ if (group_handle == NULL || group_in >= group_handle->group_count || port_id >= PORT_COUNT)
+ return -1;
+
+ /* Don't translate group 0 */
+ if (group_in == 0) {
+ *group_out = 0;
+ return 0;
+ }
+
+ table_ptr = &group_handle->translation_table[port_id * OWNER_ID_COUNT * PORT_COUNT +
+ owner_id * OWNER_ID_COUNT + group_in];
+ lookup = *table_ptr;
+
+ if (lookup == 0) {
+ for (lookup = 1; lookup < group_handle->group_count &&
+ group_handle->lookup_entries[lookup].ref_counter > 0;
+ ++lookup)
+ ;
+
+ if (lookup < group_handle->group_count) {
+ group_handle->lookup_entries[lookup].reverse_lookup = table_ptr;
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+
+ *table_ptr = lookup;
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+ }
+
+ *group_out = lookup;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 9b46848e59..5635ac4524 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -4,6 +4,7 @@
*/
#include <pthread.h>
+#include <stdint.h>
#include <stdlib.h>
#include <string.h>
@@ -11,6 +12,10 @@
#define NTNIC_ARRAY_BITS 14
#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+#define NTNIC_ARRAY_MASK (NTNIC_ARRAY_SIZE - 1)
+#define NTNIC_MAX_ID (NTNIC_ARRAY_SIZE * NTNIC_ARRAY_SIZE)
+#define NTNIC_MAX_ID_MASK (NTNIC_MAX_ID - 1)
+#define NTNIC_MIN_FREE 1000
struct ntnic_id_table_element {
union flm_handles handle;
@@ -29,6 +34,36 @@ struct ntnic_id_table_data {
uint32_t free_count;
};
+static inline struct ntnic_id_table_element *
+ntnic_id_table_array_find_element(struct ntnic_id_table_data *handle, uint32_t id)
+{
+ uint32_t idx_d1 = id & NTNIC_ARRAY_MASK;
+ uint32_t idx_d2 = (id >> NTNIC_ARRAY_BITS) & NTNIC_ARRAY_MASK;
+
+ if (handle->arrays[idx_d2] == NULL) {
+ handle->arrays[idx_d2] =
+ calloc(NTNIC_ARRAY_SIZE, sizeof(struct ntnic_id_table_element));
+ }
+
+ return &handle->arrays[idx_d2][idx_d1];
+}
+
+static inline uint32_t ntnic_id_table_array_pop_free_id(struct ntnic_id_table_data *handle)
+{
+ uint32_t id = 0;
+
+ if (handle->free_count > NTNIC_MIN_FREE) {
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_tail);
+ id = handle->free_tail;
+
+ handle->free_tail = element->handle.idx & NTNIC_MAX_ID_MASK;
+ handle->free_count -= 1;
+ }
+
+ return id;
+}
+
void *ntnic_id_table_create(void)
{
struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
@@ -50,3 +85,47 @@ void ntnic_id_table_destroy(void *id_table)
free(id_table);
}
+
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ uint32_t new_id = ntnic_id_table_array_pop_free_id(handle);
+
+ if (new_id == 0)
+ new_id = handle->next_id++;
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, new_id);
+ element->caller_id = caller_id;
+ element->type = type;
+ memcpy(&element->handle, &flm_h, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+
+ return new_id;
+}
+
+void ntnic_id_table_free_id(void *id_table, uint32_t id)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *current_element =
+ ntnic_id_table_array_find_element(handle, id);
+ memset(current_element, 0, sizeof(struct ntnic_id_table_element));
+
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_head);
+ element->handle.idx = id;
+ handle->free_head = id;
+ handle->free_count += 1;
+
+ if (handle->free_tail == 0)
+ handle->free_tail = handle->free_head;
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index 13455f1165..e190fe4a11 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -16,4 +16,8 @@ union flm_handles {
void *ntnic_id_table_create(void);
void ntnic_id_table_destroy(void *id_table);
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type);
+void ntnic_id_table_free_id(void *id_table, uint32_t id);
+
#endif /* _FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
new file mode 100644
index 0000000000..ad7efafe08
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+
+#include "hw_mod_flm_v25.h"
+
+#include "flm_lrn_queue.h"
+
+#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ unsigned int n = rte_ring_enqueue_zc_burst_elem_start(q, ELEM_SIZE, 1, &zcd, NULL);
+ return (n == 0) ? NULL : zcd.ptr1;
+}
+
+void flm_lrn_queue_release_write_buffer(void *q)
+{
+ rte_ring_enqueue_zc_elem_finish(q, 1);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
new file mode 100644
index 0000000000..8cee0c8e78
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_LRN_QUEUE_H_
+#define _FLM_LRN_QUEUE_H_
+
+#include <stdint.h>
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q);
+void flm_lrn_queue_release_write_buffer(void *q);
+
+#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5fda11183c..eb6bad07b8 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -3,6 +3,9 @@
*/
+#include "hw_mod_backend.h"
+#include "flow_api_engine.h"
+
#include "flow_api_hw_db_inline.h"
/******************************************************************************/
@@ -57,3 +60,92 @@ void hw_db_inline_destroy(void *db_handle)
free(db);
}
+
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size)
+{
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_COT:
+ hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/******************************************************************************/
+/* COT */
+/******************************************************************************/
+
+static int hw_db_inline_cot_compare(const struct hw_db_inline_cot_data *data1,
+ const struct hw_db_inline_cot_data *data2)
+{
+ return data1->matcher_color_contrib == data2->matcher_color_contrib &&
+ data1->frag_rcp == data2->frag_rcp;
+}
+
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cot_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_COT;
+
+ for (uint32_t i = 1; i < db->nb_cot; ++i) {
+ int ref = db->cot[i].ref;
+
+ if (ref > 0 && hw_db_inline_cot_compare(data, &db->cot[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cot_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cot[idx.ids].ref = 1;
+ memcpy(&db->cot[idx.ids].data, data, sizeof(struct hw_db_inline_cot_data));
+
+ return idx;
+}
+
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cot[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cot[idx.ids].ref -= 1;
+
+ if (db->cot[idx.ids].ref <= 0) {
+ memset(&db->cot[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cot_data));
+ db->cot[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 23caf73cf3..0116af015d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -9,15 +9,79 @@
#include "flow_api.h"
+#define HW_DB_INLINE_MAX_QST_PER_QSL 128
+#define HW_DB_INLINE_MAX_ENCAP_SIZE 128
+
+#define HW_DB_IDX \
+ union { \
+ struct { \
+ uint32_t id1 : 8; \
+ uint32_t id2 : 8; \
+ uint32_t id3 : 8; \
+ uint32_t type : 7; \
+ uint32_t error : 1; \
+ }; \
+ struct { \
+ uint32_t ids : 24; \
+ }; \
+ uint32_t raw; \
+ }
+
+/* Strongly typed int types */
+struct hw_db_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_cot_idx {
+ HW_DB_IDX;
+};
+
+enum hw_db_idx_type {
+ HW_DB_IDX_TYPE_NONE = 0,
+ HW_DB_IDX_TYPE_COT,
+};
+
+/* Functionality data types */
+struct hw_db_inline_qsl_data {
+ uint32_t discard : 1;
+ uint32_t drop : 1;
+ uint32_t table_size : 7;
+ uint32_t retransmit : 1;
+ uint32_t padding : 22;
+
+ struct {
+ uint16_t queue : 7;
+ uint16_t queue_en : 1;
+ uint16_t tx_port : 3;
+ uint16_t tx_port_en : 1;
+ uint16_t padding : 4;
+ } table[HW_DB_INLINE_MAX_QST_PER_QSL];
+};
+
struct hw_db_inline_cot_data {
uint32_t matcher_color_contrib : 4;
uint32_t frag_rcp : 4;
uint32_t padding : 24;
};
+struct hw_db_inline_hsh_data {
+ uint32_t func;
+ uint64_t hash_mask;
+ uint8_t key[MAX_RSS_KEY_LEN];
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
void hw_db_inline_destroy(void *db_handle);
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size);
+
+/**/
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data);
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 986196b408..eb1f3227ed 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,13 +4,573 @@
*/
#include "ntlog.h"
+#include "nt_util.h"
+
+#include "hw_mod_backend.h"
+#include "flm_lrn_queue.h"
+#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
#include "flow_id_table.h"
+#include "stream_binary_flow_api.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
+static void *flm_lrn_queue_arr;
+
+struct flm_flow_key_def_s {
+ union {
+ struct {
+ uint64_t qw0_dyn : 7;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 7;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 7;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 7;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_proto : 1;
+ uint64_t inner_proto : 1;
+ uint64_t pad : 2;
+ };
+ uint64_t data;
+ };
+ uint32_t mask[10];
+};
+
+/*
+ * Flow Matcher functionality
+ */
+static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
+{
+ struct flow_eth_dev *dev = ndev->eth_base;
+
+ while (dev) {
+ if (dev->port_id == port_id)
+ return dev->port;
+
+ dev = dev->next;
+ }
+
+ return UINT8_MAX;
+}
+
+static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base)
+ ndev->flow_base->prev = fh;
+
+ fh->next = ndev->flow_base;
+ fh->prev = NULL;
+ ndev->flow_base = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ struct flow_handle *next = fh->next;
+ struct flow_handle *prev = fh->prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base == fh) {
+ ndev->flow_base = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base_flm)
+ ndev->flow_base_flm->prev = fh;
+
+ fh->next = ndev->flow_base_flm;
+ fh->prev = NULL;
+ ndev->flow_base_flm = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh_flm)
+{
+ struct flow_handle *next = fh_flm->next;
+ struct flow_handle *prev = fh_flm->prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base_flm = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base_flm == fh_flm) {
+ ndev->flow_base_flm = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
+{
+ if (fd) {
+ fd->full_offload = -1;
+ fd->in_port_override = -1;
+ fd->mark = UINT32_MAX;
+ fd->jump_to_group = UINT32_MAX;
+
+ fd->l2_prot = -1;
+ fd->l3_prot = -1;
+ fd->l4_prot = -1;
+ fd->vlans = 0;
+ fd->tunnel_prot = -1;
+ fd->tunnel_l3_prot = -1;
+ fd->tunnel_l4_prot = -1;
+ fd->fragmentation = -1;
+ fd->ip_prot = -1;
+ fd->tunnel_ip_prot = -1;
+
+ fd->non_empty = -1;
+ }
+
+ return fd;
+}
+
+static inline struct nic_flow_def *allocate_nic_flow_def(void)
+{
+ return prepare_nic_flow_def(calloc(1, sizeof(struct nic_flow_def)));
+}
+
+static bool fd_has_empty_pattern(const struct nic_flow_def *fd)
+{
+ return fd && fd->vlans == 0 && fd->l2_prot < 0 && fd->l3_prot < 0 && fd->l4_prot < 0 &&
+ fd->tunnel_prot < 0 && fd->tunnel_l3_prot < 0 && fd->tunnel_l4_prot < 0 &&
+ fd->ip_prot < 0 && fd->tunnel_ip_prot < 0 && fd->non_empty < 0;
+}
+
+static inline const void *memcpy_mask_if(void *dest, const void *src, const void *mask,
+ size_t count)
+{
+ if (mask == NULL)
+ return src;
+
+ unsigned char *dest_ptr = (unsigned char *)dest;
+ const unsigned char *src_ptr = (const unsigned char *)src;
+ const unsigned char *mask_ptr = (const unsigned char *)mask;
+
+ for (size_t i = 0; i < count; ++i)
+ dest_ptr[i] = src_ptr[i] & mask_ptr[i];
+
+ return dest;
+}
+
+static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLM)
+ return -1;
+
+ if (flm_op == NT_FLM_OP_LEARN) {
+ union flm_handles flm_h;
+ flm_h.p = fh;
+ fh->flm_id = ntnic_id_table_get_id(fh->dev->ndev->id_table_handle, flm_h,
+ fh->caller_id, 1);
+ }
+
+ uint32_t flm_id = fh->flm_id;
+
+ if (flm_op == NT_FLM_OP_UNLEARN) {
+ ntnic_id_table_free_id(fh->dev->ndev->id_table_handle, flm_id);
+
+ if (fh->learn_ignored == 1)
+ return 0;
+ }
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->id = flm_id;
+
+ learn_record->qw0[0] = fh->flm_data[9];
+ learn_record->qw0[1] = fh->flm_data[8];
+ learn_record->qw0[2] = fh->flm_data[7];
+ learn_record->qw0[3] = fh->flm_data[6];
+ learn_record->qw4[0] = fh->flm_data[5];
+ learn_record->qw4[1] = fh->flm_data[4];
+ learn_record->qw4[2] = fh->flm_data[3];
+ learn_record->qw4[3] = fh->flm_data[2];
+ learn_record->sw8 = fh->flm_data[1];
+ learn_record->sw9 = fh->flm_data[0];
+ learn_record->prot = fh->flm_prot;
+
+ /* Last non-zero mtr is used for statistics */
+ uint8_t mbrs = 0;
+
+ learn_record->vol_idx = mbrs;
+
+ learn_record->nat_ip = fh->flm_nat_ipv4;
+ learn_record->nat_port = fh->flm_nat_port;
+ learn_record->nat_en = fh->flm_nat_ipv4 || fh->flm_nat_port ? 1 : 0;
+
+ learn_record->dscp = fh->flm_dscp;
+ learn_record->teid = fh->flm_teid;
+ learn_record->qfi = fh->flm_qfi;
+ learn_record->rqi = fh->flm_rqi;
+ /* Lower 10 bits used for RPL EXT PTR */
+ learn_record->color = fh->flm_rpl_ext_ptr & 0x3ff;
+
+ learn_record->ent = 0;
+ learn_record->op = flm_op & 0xf;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->prio = fh->flm_prio & 0x3;
+ learn_record->ft = fh->flm_ft;
+ learn_record->kid = fh->flm_kid;
+ learn_record->eor = 1;
+ learn_record->scrub_prof = 0;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+ return 0;
+}
+
+/*
+ * This function must be callable without locking any mutexes
+ */
+static int interpret_flow_actions(const struct flow_eth_dev *dev,
+ const struct rte_flow_action action[],
+ const struct rte_flow_action *action_mask,
+ struct nic_flow_def *fd,
+ struct rte_flow_error *error,
+ uint32_t *num_dest_port,
+ uint32_t *num_queues)
+{
+ unsigned int encap_decap_order = 0;
+
+ *num_dest_port = 0;
+ *num_queues = 0;
+
+ if (action == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow actions missing");
+ return -1;
+ }
+
+ /*
+ * Gather the flow match and actions and convert them into the internal flow
+ * definition structure (struct nic_flow_def_s). This is the 1st step in flow
+ * creation - validate, convert and prepare.
+ */
+ for (int aidx = 0; action[aidx].type != RTE_FLOW_ACTION_TYPE_END; ++aidx) {
+ switch (action[aidx].type) {
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_PORT_ID", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_port_id port_id_tmp;
+ const struct rte_flow_action_port_id *port_id =
+ memcpy_mask_if(&port_id_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_port_id));
+
+ if (*num_dest_port > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple port_id actions for one flow is not supported");
+ flow_nic_set_error(ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED,
+ error);
+ return -1;
+ }
+
+ uint8_t port = get_port_from_port_id(dev->ndev, port_id->id);
+
+ if (fd->dst_num_avail == MAX_OUTPUT_DEST) {
+ NT_LOG(ERR, FILTER, "Too many output destinations");
+ flow_nic_set_error(ERR_OUTPUT_TOO_MANY, error);
+ return -1;
+ }
+
+ if (port >= dev->ndev->be.num_phy_ports) {
+ NT_LOG(ERR, FILTER, "Phy port out of range");
+ flow_nic_set_error(ERR_OUTPUT_INVALID, error);
+ return -1;
+ }
+
+ /* New destination port to add */
+ fd->dst_id[fd->dst_num_avail].owning_port_id = port_id->id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_PHY;
+ fd->dst_id[fd->dst_num_avail].id = (int)port;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ if (fd->full_offload < 0)
+ fd->full_offload = 1;
+
+ *num_dest_port += 1;
+
+ NT_LOG(DBG, FILTER, "Phy port ID: %i", (int)port);
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
+ action[aidx].type);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+ }
+
+ if (!(encap_decap_order == 0 || encap_decap_order == 2)) {
+ NT_LOG(ERR, FILTER, "Invalid encap/decap actions");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int interpret_flow_elements(const struct flow_eth_dev *dev,
+ const struct rte_flow_item elem[],
+ struct nic_flow_def *fd,
+ struct rte_flow_error *error,
+ uint16_t implicit_vlan_vid,
+ uint32_t *in_port_id,
+ uint32_t *packet_data,
+ uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
+{
+ (void)fd;
+ (void)implicit_vlan_vid;
+
+ *in_port_id = UINT32_MAX;
+
+ memset(packet_data, 0x0, sizeof(uint32_t) * 10);
+ memset(packet_mask, 0x0, sizeof(uint32_t) * 10);
+ memset(key_def, 0x0, sizeof(struct flm_flow_key_def_s));
+
+ if (elem == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow items missing");
+ return -1;
+ }
+
+ int qw_reserved_mac = 0;
+ int qw_reserved_ipv6 = 0;
+
+ int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
+
+ if (qw_free < 0) {
+ NT_LOG(ERR, FILTER, "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ANY:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
+ (int)elem[eidx].type);
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
+ uint32_t flm_key_id, uint32_t flm_ft, uint16_t rpl_ext_ptr,
+ uint32_t flm_scrub, uint32_t priority)
+{
+ (void)packet_data;
+ (void)flm_key_id;
+ (void)flm_ft;
+ (void)rpl_ext_ptr;
+ (void)flm_scrub;
+ (void)priority;
+
+ struct nic_flow_def *fd;
+ struct flow_handle fh_copy;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLOW)
+ return -1;
+
+ memcpy(&fh_copy, fh, sizeof(struct flow_handle));
+ memset(fh, 0x0, sizeof(struct flow_handle));
+ fd = fh_copy.fd;
+
+ fh->type = FLOW_HANDLE_TYPE_FLM;
+ fh->caller_id = fh_copy.caller_id;
+ fh->dev = fh_copy.dev;
+ fh->next = fh_copy.next;
+ fh->prev = fh_copy.prev;
+ fh->user_data = fh_copy.user_data;
+
+ fh->flm_db_idx_counter = fh_copy.db_idx_counter;
+
+ for (int i = 0; i < RES_COUNT; ++i)
+ fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+
+ free(fd);
+
+ return 0;
+}
+
+static int setup_flow_flm_actions(struct flow_eth_dev *dev,
+ const struct nic_flow_def *fd,
+ const struct hw_db_inline_qsl_data *qsl_data,
+ const struct hw_db_inline_hsh_data *hsh_data,
+ uint32_t group,
+ uint32_t local_idxs[],
+ uint32_t *local_idx_counter,
+ uint16_t *flm_rpl_ext_ptr,
+ uint32_t *flm_ft,
+ uint32_t *flm_scrub,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)fd;
+ (void)group;
+ (void)local_idxs;
+ (void)local_idx_counter;
+ (void)flm_rpl_ext_ptr;
+ (void)flm_ft;
+ (void)flm_scrub;
+ (void)qsl_data;
+ (void)hsh_data;
+ (void)error;
+
+ return 0;
+}
+
+static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct nic_flow_def *fd,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid, uint16_t caller_id,
+ struct rte_flow_error *error, uint32_t port_id,
+ uint32_t num_dest_port, uint32_t num_queues,
+ uint32_t *packet_data, uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
+{
+ (void)packet_mask;
+ (void)key_def;
+ (void)forced_vlan_vid;
+ (void)num_dest_port;
+ (void)num_queues;
+ (void)packet_data;
+
+ struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
+
+ fh->type = FLOW_HANDLE_TYPE_FLOW;
+ fh->port_id = port_id;
+ fh->dev = dev;
+ fh->fd = fd;
+ fh->caller_id = caller_id;
+
+ struct hw_db_inline_qsl_data qsl_data;
+
+ struct hw_db_inline_hsh_data hsh_data;
+
+ if (attr->group > 0 && fd_has_empty_pattern(fd)) {
+ /*
+ * Default flow for group 1..32
+ */
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, NULL, NULL, NULL, error)) {
+ goto error_out;
+ }
+
+ nic_insert_flow(dev->ndev, fh);
+
+ } else if (attr->group > 0) {
+ /*
+ * Flow for group 1..32
+ */
+
+ /* Setup Actions */
+ uint16_t flm_rpl_ext_ptr = 0;
+ uint32_t flm_ft = 0;
+ uint32_t flm_scrub = 0;
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, &flm_rpl_ext_ptr, &flm_ft,
+ &flm_scrub, error)) {
+ goto error_out;
+ }
+
+ /* Program flow */
+ convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ flm_scrub, attr->priority & 0x3);
+ flm_flow_programming(fh, NT_FLM_OP_LEARN);
+
+ nic_insert_flow_flm(dev->ndev, fh);
+
+ } else {
+ /*
+ * Flow for group 0
+ */
+ nic_insert_flow(dev->ndev, fh);
+ }
+
+ return fh;
+
+error_out:
+
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ } else {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ }
+
+ free(fh);
+
+ return NULL;
+}
+
/*
* Public functions
*/
@@ -82,6 +642,92 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_action action[],
struct rte_flow_error *error)
{
+ struct flow_handle *fh = NULL;
+ int res;
+
+ uint32_t port_id = UINT32_MAX;
+ uint32_t num_dest_port;
+ uint32_t num_queues;
+
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ struct rte_flow_attr attr_local;
+ memcpy(&attr_local, attr, sizeof(struct rte_flow_attr));
+ uint16_t forced_vlan_vid_local = forced_vlan_vid;
+ uint16_t caller_id_local = caller_id;
+
+ if (attr_local.group > 0)
+ forced_vlan_vid_local = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ if (fd == NULL)
+ goto err_exit;
+
+ res = interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
+
+ if (res)
+ goto err_exit;
+
+ res = interpret_flow_elements(dev, elem, fd, error, forced_vlan_vid_local, &port_id,
+ packet_data, packet_mask, &key_def);
+
+ if (res)
+ goto err_exit;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ /* Translate group IDs */
+ if (fd->jump_to_group != UINT32_MAX &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ fd->jump_to_group, &fd->jump_to_group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (attr_local.group > 0 &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ attr_local.group, &attr_local.group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (port_id == UINT32_MAX)
+ port_id = dev->port_id;
+
+ /* Create and flush filter to NIC */
+ fh = create_flow_filter(dev, fd, &attr_local, forced_vlan_vid_local,
+ caller_id_local, error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ if (!fh)
+ goto err_exit;
+
+ NT_LOG(DBG, FILTER, "New FLOW: fh (flow handle) %p, fd (flow definition) %p", fh, fd);
+ NT_LOG(DBG, FILTER, ">>>>> [Dev %p] Nic %i, Port %i: fh %p fd %p - implementation <<<<<",
+ dev, dev->ndev->adapter_no, dev->port, fh, fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return fh;
+
+err_exit:
+
+ if (fh)
+ flow_destroy_locked_profile_inline(dev, fh, NULL);
+
+ else
+ free(fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ NT_LOG(ERR, FILTER, "ERR: %s", __func__);
return NULL;
}
@@ -96,6 +742,44 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
flow_nic_set_error(ERR_SUCCESS, error);
+ /* take flow out of ndev list - may not have been put there yet */
+ if (fh->type == FLOW_HANDLE_TYPE_FLM)
+ nic_remove_flow_flm(dev->ndev, fh);
+
+ else
+ nic_remove_flow(dev->ndev, fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ flm_flow_programming(fh, NT_FLM_OP_UNLEARN);
+
+ } else {
+ NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ free(fh->fd);
+ }
+
+ if (err) {
+ NT_LOG(ERR, FILTER, "FAILED removing flow: %p", fh);
+ flow_nic_set_error(ERR_REMOVE_FLOW_FAILED, error);
+ }
+
+ free(fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
return err;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 09/73] net/ntnic: add infrastructure for flow actions and items
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (7 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 10/73] net/ntnic: add action queue Serhii Iliushyk
` (67 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add entities (utilities, structures, etc.) required for the flow API
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 34 ++++++++
drivers/net/ntnic/include/flow_api_engine.h | 46 +++++++++++
drivers/net/ntnic/include/hw_mod_backend.h | 33 ++++++++
drivers/net/ntnic/nthw/flow_api/flow_km.c | 81 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 69 ++++++++++++++--
5 files changed, 256 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 667dad6d5f..7f031ccda8 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -85,13 +85,47 @@ struct flow_nic_dev {
enum flow_nic_err_msg_e {
ERR_SUCCESS = 0,
ERR_FAILED = 1,
+ ERR_MEMORY = 2,
ERR_OUTPUT_TOO_MANY = 3,
+ ERR_RSS_TOO_MANY_QUEUES = 4,
+ ERR_VLAN_TYPE_NOT_SUPPORTED = 5,
+ ERR_VXLAN_HEADER_NOT_ACCEPTED = 6,
+ ERR_VXLAN_POP_INVALID_RECIRC_PORT = 7,
+ ERR_VXLAN_POP_FAILED_CREATING_VTEP = 8,
+ ERR_MATCH_VLAN_TOO_MANY = 9,
+ ERR_MATCH_INVALID_IPV6_HDR = 10,
+ ERR_MATCH_TOO_MANY_TUNNEL_PORTS = 11,
ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_FAILED_BY_HW_LIMITS = 13,
ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_MATCH_FAILED_TOO_COMPLEX = 15,
+ ERR_ACTION_REPLICATION_FAILED = 16,
+ ERR_ACTION_OUTPUT_RESOURCE_EXHAUSTION = 17,
+ ERR_ACTION_TUNNEL_HEADER_PUSH_OUTPUT_LIMIT = 18,
+ ERR_ACTION_INLINE_MOD_RESOURCE_EXHAUSTION = 19,
+ ERR_ACTION_RETRANSMIT_RESOURCE_EXHAUSTION = 20,
+ ERR_ACTION_FLOW_COUNTER_EXHAUSTION = 21,
+ ERR_ACTION_INTERNAL_RESOURCE_EXHAUSTION = 22,
+ ERR_INTERNAL_QSL_COMPARE_FAILED = 23,
+ ERR_INTERNAL_CAT_FUNC_REUSE_FAILED = 24,
+ ERR_MATCH_ENTROPHY_FAILED = 25,
+ ERR_MATCH_CAM_EXHAUSTED = 26,
+ ERR_INTERNAL_VIRTUAL_PORT_CREATION_FAILED = 27,
ERR_ACTION_UNSUPPORTED = 28,
ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_ACTION_NO_OUTPUT_DEFINED_USE_DEFAULT = 30,
+ ERR_ACTION_NO_OUTPUT_QUEUE_FOUND = 31,
+ ERR_MATCH_UNSUPPORTED_ETHER_TYPE = 32,
ERR_OUTPUT_INVALID = 33,
+ ERR_MATCH_PARTIAL_OFFLOAD_NOT_SUPPORTED = 34,
+ ERR_MATCH_CAT_CAM_EXHAUSTED = 35,
+ ERR_MATCH_KCC_KEY_CLASH = 36,
+ ERR_MATCH_CAT_CAM_FAILED = 37,
+ ERR_PARTIAL_FLOW_MARK_TOO_BIG = 38,
+ ERR_FLOW_PRIORITY_VALUE_INVALID = 39,
ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_RSS_TOO_LONG_KEY = 41,
+ ERR_ACTION_AGE_UNSUPPORTED_GROUP_0 = 42,
ERR_MSG_NO_MSG
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b8da5eafba..13fad2760a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -54,6 +54,30 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+#define MAX_MATCH_FIELDS 16
+
+struct match_elem_s {
+ int masked_for_tcam; /* if potentially selected for TCAM */
+ uint32_t e_word[4];
+ uint32_t e_mask[4];
+
+ int extr_start_offs_id;
+ int8_t rel_offs;
+ uint32_t word_len;
+};
+
+struct km_flow_def_s {
+ struct flow_api_backend_s *be;
+
+ /* For collect flow elements and sorting */
+ struct match_elem_s match[MAX_MATCH_FIELDS];
+ int num_ftype_elem;
+
+ /* Flow information */
+ /* HW input port ID needed for compare. In port must be identical on flow types */
+ uint32_t port_id;
+};
+
enum flow_port_type_e {
PORT_NONE, /* not defined or drop */
PORT_INTERNAL, /* no queues attached */
@@ -99,6 +123,25 @@ struct nic_flow_def {
uint32_t jump_to_group;
int full_offload;
+
+ /*
+ * Modify field
+ */
+ struct {
+ uint32_t select;
+ union {
+ uint8_t value8[16];
+ uint16_t value16[8];
+ uint32_t value32[4];
+ };
+ } modify_field[MAX_CPY_WRITERS_SUPPORTED];
+
+ uint32_t modify_field_count;
+
+ /*
+ * Key Matcher flow definitions
+ */
+ struct km_flow_def_s km;
};
enum flow_handle_type {
@@ -159,6 +202,9 @@ struct flow_handle {
void km_free_ndev_resource_management(void **handle);
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start, int8_t offset);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 34154c65f8..99b207a01c 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -133,6 +133,39 @@ enum km_flm_if_select_e {
unsigned int alloced_size; \
int debug
+enum {
+ PROT_OTHER = 0,
+ PROT_L2_ETH2 = 1,
+};
+
+enum {
+ PROT_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_L4_ICMP = 4
+};
+
+enum {
+ PROT_TUN_L3_OTHER = 0,
+ PROT_TUN_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_ICMP = 4
+};
+
+
+enum {
+ CPY_SELECT_DSCP_IPV4 = 0,
+ CPY_SELECT_DSCP_IPV6 = 1,
+ CPY_SELECT_RQI_QFI = 2,
+ CPY_SELECT_IPV4 = 3,
+ CPY_SELECT_PORT = 4,
+ CPY_SELECT_TEID = 5,
+};
+
struct common_func_s {
COMMON_FUNC_INFO_S;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index e04cd5e857..237e9f7b4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -3,10 +3,38 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <assert.h>
#include <stdlib.h>
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
+#include "nt_util.h"
+
+#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+
+static const struct cam_match_masks_s {
+ uint32_t word_len;
+ uint32_t key_mask[4];
+} cam_masks[] = {
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff } }, /* IP6_SRC, IP6_DST */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffff0000 } }, /* DMAC,SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffff0000, 0x00000000, 0xffff0000 } }, /* DMAC,ethtype */
+ { 4, { 0x00000000, 0x0000ffff, 0xffffffff, 0xffff0000 } }, /* SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0x00000000 } }, /* ETH_128 */
+ { 2, { 0xffffffff, 0xffffffff, 0x00000000, 0x00000000 } }, /* IP4_COMBINED */
+ /*
+ * ETH_TYPE, IP4_TTL_PROTO, IP4_SRC, IP4_DST, IP6_FLOW_TC,
+ * IP6_NEXT_HDR_HOP, TP_PORT_COMBINED, SIDEBAND_VNI
+ */
+ { 1, { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IP4_IHL_TOS, TP_PORT_SRC32_OR_ICMP, TCP_CTRL */
+ { 1, { 0xffff0000, 0x00000000, 0x00000000, 0x00000000 } },
+ { 1, { 0x0000ffff, 0x00000000, 0x00000000, 0x00000000 } }, /* TP_PORT_DST32 */
+ /* IPv4 TOS mask bits used often by OVS */
+ { 1, { 0x00030000, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IPv6 TOS mask bits used often by OVS */
+ { 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
+};
void km_free_ndev_resource_management(void **handle)
{
@@ -17,3 +45,56 @@ void km_free_ndev_resource_management(void **handle)
*handle = NULL;
}
+
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start_id, int8_t offset)
+{
+ /* valid word_len 1,2,4 */
+ if (word_len == 3) {
+ word_len = 4;
+ e_word[3] = 0;
+ e_mask[3] = 0;
+ }
+
+ if (word_len < 1 || word_len > 4) {
+ assert(0);
+ return -1;
+ }
+
+ for (unsigned int i = 0; i < word_len; i++) {
+ km->match[km->num_ftype_elem].e_word[i] = e_word[i];
+ km->match[km->num_ftype_elem].e_mask[i] = e_mask[i];
+ }
+
+ km->match[km->num_ftype_elem].word_len = word_len;
+ km->match[km->num_ftype_elem].rel_offs = offset;
+ km->match[km->num_ftype_elem].extr_start_offs_id = start_id;
+
+ /*
+ * Determine here if this flow may better be put into TCAM
+ * Otherwise it will go into CAM
+ * This is dependent on a cam_masks list defined above
+ */
+ km->match[km->num_ftype_elem].masked_for_tcam = 1;
+
+ for (unsigned int msk = 0; msk < NUM_CAM_MASKS; msk++) {
+ if (word_len == cam_masks[msk].word_len) {
+ int match = 1;
+
+ for (unsigned int wd = 0; wd < word_len; wd++) {
+ if (e_mask[wd] != cam_masks[msk].key_mask[wd]) {
+ match = 0;
+ break;
+ }
+ }
+
+ if (match) {
+ /* Can go into CAM */
+ km->match[km->num_ftype_elem].masked_for_tcam = 0;
+ }
+ }
+ }
+
+ km->num_ftype_elem++;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index eb1f3227ed..fa40f15c0c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -418,17 +418,69 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return 0;
}
+static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def *fd,
+ const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
+ uint16_t rpl_ext_ptr, uint32_t flm_scrub, uint32_t priority)
+{
+ (void)flm_scrub;
+ switch (fd->l4_prot) {
+ case PROT_L4_ICMP:
+ fh->flm_prot = fd->ip_prot;
+ break;
+
+ default:
+ switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_ICMP:
+ fh->flm_prot = fd->tunnel_ip_prot;
+ break;
+
+ default:
+ fh->flm_prot = 0;
+ break;
+ }
+
+ break;
+ }
+
+ memcpy(fh->flm_data, packet_data, sizeof(uint32_t) * 10);
+
+ fh->flm_kid = flm_key_id;
+ fh->flm_rpl_ext_ptr = rpl_ext_ptr;
+ fh->flm_prio = (uint8_t)priority;
+ fh->flm_ft = (uint8_t)flm_ft;
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ switch (fd->modify_field[i].select) {
+ case CPY_SELECT_DSCP_IPV4:
+ case CPY_SELECT_RQI_QFI:
+ fh->flm_rqi = (fd->modify_field[i].value8[0] >> 6) & 0x1;
+ fh->flm_qfi = fd->modify_field[i].value8[0] & 0x3f;
+ break;
+
+ case CPY_SELECT_IPV4:
+ fh->flm_nat_ipv4 = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ case CPY_SELECT_PORT:
+ fh->flm_nat_port = ntohs(fd->modify_field[i].value16[0]);
+ break;
+
+ case CPY_SELECT_TEID:
+ fh->flm_teid = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Unknown modify field: %d",
+ fd->modify_field[i].select);
+ break;
+ }
+ }
+}
+
static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
uint32_t flm_key_id, uint32_t flm_ft, uint16_t rpl_ext_ptr,
uint32_t flm_scrub, uint32_t priority)
{
- (void)packet_data;
- (void)flm_key_id;
- (void)flm_ft;
- (void)rpl_ext_ptr;
- (void)flm_scrub;
- (void)priority;
-
struct nic_flow_def *fd;
struct flow_handle fh_copy;
@@ -451,6 +503,9 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
for (int i = 0; i < RES_COUNT; ++i)
fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+ copy_fd_to_fh_flm(fh, fd, packet_data, flm_key_id, flm_ft, rpl_ext_ptr, flm_scrub,
+ priority);
+
free(fd);
return 0;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 10/73] net/ntnic: add action queue
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (8 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 09/73] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 11/73] net/ntnic: add action mark Serhii Iliushyk
` (66 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add the possibility to use RTE_FLOW_ACTION_TYPE_QUEUE
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 37 +++++++++++++++++++
2 files changed, 38 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 1c653fd5a0..5b3c26da05 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,3 +18,4 @@ any = Y
[rte_flow actions]
port_id = Y
+queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index fa40f15c0c..ec22c63b85 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -22,6 +22,15 @@
static void *flm_lrn_queue_arr;
+static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
+{
+ for (int i = 0; i < dev->num_queues; ++i)
+ if (dev->rx_queue[i].id == id)
+ return dev->rx_queue[i].hw_id;
+
+ return -1;
+}
+
struct flm_flow_key_def_s {
union {
struct {
@@ -348,6 +357,34 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_queue queue_tmp;
+ const struct rte_flow_action_queue *queue =
+ memcpy_mask_if(&queue_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_queue));
+
+ int hw_id = rx_queue_idx_to_hw_id(dev, queue->index);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE port %u, queue index: %u, hw id %u",
+ dev, dev->port, queue->index, hw_id);
+
+ fd->full_offload = 0;
+ *num_queues += 1;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 11/73] net/ntnic: add action mark
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (9 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 10/73] net/ntnic: add action queue Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 12/73] net/ntnic: add action jump Serhii Iliushyk
` (65 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add the possibility to use RTE_FLOW_ACTION_TYPE_MARK
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 5b3c26da05..42ac9f9c31 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,5 +17,6 @@ x86-64 = Y
any = Y
[rte_flow actions]
+mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index ec22c63b85..350eab009e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -385,6 +385,22 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_mark mark_tmp;
+ const struct rte_flow_action_mark *mark =
+ memcpy_mask_if(&mark_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_mark));
+
+ fd->mark = mark->id;
+ NT_LOG(DBG, FILTER, "Mark: %i", mark->id);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 12/73] net/ntnic: add action jump
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (10 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 11/73] net/ntnic: add action mark Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 13/73] net/ntnic: add action drop Serhii Iliushyk
` (64 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ACTION_TYPE_JUMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 17 +++++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 42ac9f9c31..f3334fc86d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+jump = Y
mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 350eab009e..68a54f7590 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -401,6 +401,23 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_jump jump_tmp;
+ const struct rte_flow_action_jump *jump =
+ memcpy_mask_if(&jump_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_jump));
+
+ fd->jump_to_group = jump->group;
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP: group %u",
+ dev, jump->group);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 13/73] net/ntnic: add action drop
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (11 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 12/73] net/ntnic: add action jump Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 14/73] net/ntnic: add item eth Serhii Iliushyk
` (63 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ACTION_TYPE_DROP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index f3334fc86d..372653695d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+drop = Y
jump = Y
mark = Y
port_id = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 68a54f7590..664f9c337e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -418,6 +418,18 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_DROP", dev);
+
+ if (action[aidx].conf) {
+ fd->dst_id[fd->dst_num_avail].owning_port_id = 0;
+ fd->dst_id[fd->dst_num_avail].id = 0;
+ fd->dst_id[fd->dst_num_avail].type = PORT_NONE;
+ fd->dst_num_avail++;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 14/73] net/ntnic: add item eth
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (12 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 13/73] net/ntnic: add action drop Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 15/73] net/ntnic: add item IPv4 Serhii Iliushyk
` (62 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_ETH
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 23 +++
.../profile_inline/flow_api_profile_inline.c | 180 ++++++++++++++++++
3 files changed, 204 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 372653695d..36b8212bae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -15,6 +15,7 @@ x86-64 = Y
[rte_flow items]
any = Y
+eth = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 99b207a01c..0c22129fb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -120,6 +120,29 @@ enum {
} \
} while (0)
+static inline int is_non_zero(const void *addr, size_t n)
+{
+ size_t i = 0;
+ const uint8_t *p = (const uint8_t *)addr;
+
+ for (i = 0; i < n; i++)
+ if (p[i] != 0)
+ return 1;
+
+ return 0;
+}
+
+enum frame_offs_e {
+ DYN_L2 = 1,
+ DYN_L3 = 4,
+ DYN_L4 = 7,
+ DYN_L4_PAYLOAD = 8,
+ DYN_TUN_L3 = 13,
+ DYN_TUN_L4 = 16,
+};
+
+/* Sideband info bit indicator */
+
enum km_flm_if_select_e {
KM_FLM_IF_FIRST = 0,
KM_FLM_IF_SECOND = 1
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 664f9c337e..0f47f00e64 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -54,6 +54,36 @@ struct flm_flow_key_def_s {
/*
* Flow Matcher functionality
*/
+static inline void set_key_def_qw(struct flm_flow_key_def_s *key_def, unsigned int qw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(qw < 2);
+
+ if (qw == 0) {
+ key_def->qw0_dyn = dyn & 0x7f;
+ key_def->qw0_ofs = ofs & 0xff;
+
+ } else {
+ key_def->qw4_dyn = dyn & 0x7f;
+ key_def->qw4_ofs = ofs & 0xff;
+ }
+}
+
+static inline void set_key_def_sw(struct flm_flow_key_def_s *key_def, unsigned int sw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(sw < 2);
+
+ if (sw == 0) {
+ key_def->sw8_dyn = dyn & 0x7f;
+ key_def->sw8_ofs = ofs & 0xff;
+
+ } else {
+ key_def->sw9_dyn = dyn & 0x7f;
+ key_def->sw9_ofs = ofs & 0xff;
+ }
+}
+
static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
{
struct flow_eth_dev *dev = ndev->eth_base;
@@ -459,6 +489,11 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
(void)fd;
(void)implicit_vlan_vid;
+ uint32_t any_count = 0;
+
+ unsigned int qw_counter = 0;
+ unsigned int sw_counter = 0;
+
*in_port_id = UINT32_MAX;
memset(packet_data, 0x0, sizeof(uint32_t) * 10);
@@ -474,6 +509,28 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH: {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (eth_spec != NULL && eth_mask != NULL) {
+ if (is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6)) {
+ qw_reserved_mac += 1;
+ }
+ }
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
if (qw_free < 0) {
@@ -486,6 +543,129 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
switch (elem[eidx].type) {
case RTE_FLOW_ITEM_TYPE_ANY:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ any_count += 1;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ETH",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (any_count > 0) {
+ NT_LOG(ERR, FILTER,
+ "Tunneled L2 ethernet not supported");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (eth_spec == NULL || eth_mask == NULL) {
+ fd->l2_prot = PROT_L2_ETH2;
+ break;
+ }
+
+ int non_zero = is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6);
+
+ if (non_zero ||
+ (eth_mask->ether_type != 0 && sw_counter >= 2)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ((eth_spec->dst_addr.addr_bytes[0] &
+ eth_mask->dst_addr.addr_bytes[0]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[1] &
+ eth_mask->dst_addr.addr_bytes[1]) << 16) +
+ ((eth_spec->dst_addr.addr_bytes[2] &
+ eth_mask->dst_addr.addr_bytes[2]) << 8) +
+ (eth_spec->dst_addr.addr_bytes[3] &
+ eth_mask->dst_addr.addr_bytes[3]);
+
+ qw_data[1] = ((eth_spec->dst_addr.addr_bytes[4] &
+ eth_mask->dst_addr.addr_bytes[4]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[5] &
+ eth_mask->dst_addr.addr_bytes[5]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[0] &
+ eth_mask->src_addr.addr_bytes[0]) << 8) +
+ (eth_spec->src_addr.addr_bytes[1] &
+ eth_mask->src_addr.addr_bytes[1]);
+
+ qw_data[2] = ((eth_spec->src_addr.addr_bytes[2] &
+ eth_mask->src_addr.addr_bytes[2]) << 24) +
+ ((eth_spec->src_addr.addr_bytes[3] &
+ eth_mask->src_addr.addr_bytes[3]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[4] &
+ eth_mask->src_addr.addr_bytes[4]) << 8) +
+ (eth_spec->src_addr.addr_bytes[5] &
+ eth_mask->src_addr.addr_bytes[5]);
+
+ qw_data[3] = ntohs(eth_spec->ether_type &
+ eth_mask->ether_type) << 16;
+
+ qw_mask[0] = (eth_mask->dst_addr.addr_bytes[0] << 24) +
+ (eth_mask->dst_addr.addr_bytes[1] << 16) +
+ (eth_mask->dst_addr.addr_bytes[2] << 8) +
+ eth_mask->dst_addr.addr_bytes[3];
+
+ qw_mask[1] = (eth_mask->dst_addr.addr_bytes[4] << 24) +
+ (eth_mask->dst_addr.addr_bytes[5] << 16) +
+ (eth_mask->src_addr.addr_bytes[0] << 8) +
+ eth_mask->src_addr.addr_bytes[1];
+
+ qw_mask[2] = (eth_mask->src_addr.addr_bytes[2] << 24) +
+ (eth_mask->src_addr.addr_bytes[3] << 16) +
+ (eth_mask->src_addr.addr_bytes[4] << 8) +
+ eth_mask->src_addr.addr_bytes[5];
+
+ qw_mask[3] = ntohs(eth_mask->ether_type) << 16;
+
+ km_add_match_elem(&fd->km,
+ &qw_data[(size_t)(qw_counter * 4)],
+ &qw_mask[(size_t)(qw_counter * 4)], 4, DYN_L2, 0);
+ set_key_def_qw(key_def, qw_counter, DYN_L2, 0);
+ qw_counter += 1;
+
+ if (!non_zero)
+ qw_free -= 1;
+
+ } else if (eth_mask->ether_type != 0) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(eth_mask->ether_type) << 16;
+ sw_data[0] = ntohs(eth_spec->ether_type) << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, DYN_L2, 12);
+ set_key_def_sw(key_def, sw_counter, DYN_L2, 12);
+ sw_counter += 1;
+ }
+
+ fd->l2_prot = PROT_L2_ETH2;
+ }
+
+ break;
+
dev->ndev->adapter_no, dev->port);
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 15/73] net/ntnic: add item IPv4
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (13 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 14/73] net/ntnic: add item eth Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 16/73] net/ntnic: add item ICMP Serhii Iliushyk
` (61 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_IPV4
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 162 ++++++++++++++++++
2 files changed, 163 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 36b8212bae..bae25d2e2d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+ipv4 = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0f47f00e64..aa1b5cf15d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -666,7 +666,169 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv4 *ipv4_spec =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].mask;
+
+ if (ipv4_spec == NULL || ipv4_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.version_ihl != 0 ||
+ ipv4_mask->hdr.type_of_service != 0 ||
+ ipv4_mask->hdr.total_length != 0 ||
+ ipv4_mask->hdr.packet_id != 0 ||
+ (ipv4_mask->hdr.fragment_offset != 0 &&
+ (ipv4_spec->hdr.fragment_offset != 0xffff ||
+ ipv4_mask->hdr.fragment_offset != 0xffff)) ||
+ ipv4_mask->hdr.time_to_live != 0 ||
+ ipv4_mask->hdr.hdr_checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv4 field not supported by running SW version.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (ipv4_spec->hdr.fragment_offset == 0xffff &&
+ ipv4_mask->hdr.fragment_offset == 0xffff) {
+ fd->fragmentation = 0xfe;
+ }
+
+ int match_cnt = (ipv4_mask->hdr.src_addr != 0) +
+ (ipv4_mask->hdr.dst_addr != 0) +
+ (ipv4_mask->hdr.next_proto_id != 0);
+
+ if (match_cnt <= 0) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (qw_free > 0 &&
+ (match_cnt >= 2 ||
+ (match_cnt == 1 && sw_counter >= 2))) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED,
+ error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_mask[0] = 0;
+ qw_data[0] = 0;
+
+ qw_mask[1] = ipv4_mask->hdr.next_proto_id << 16;
+ qw_data[1] = ipv4_spec->hdr.next_proto_id
+ << 16 & qw_mask[1];
+
+ qw_mask[2] = ntohl(ipv4_mask->hdr.src_addr);
+ qw_mask[3] = ntohl(ipv4_mask->hdr.dst_addr);
+
+ qw_data[2] = ntohl(ipv4_spec->hdr.src_addr) & qw_mask[2];
+ qw_data[3] = ntohl(ipv4_spec->hdr.dst_addr) & qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.src_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.src_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.src_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 12);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 12);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.dst_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.dst_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.dst_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 16);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 16);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.next_proto_id) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv4_mask->hdr.next_proto_id << 16;
+ sw_data[0] = ipv4_spec->hdr.next_proto_id
+ << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ sw_counter += 1;
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 16/73] net/ntnic: add item ICMP
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (14 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 15/73] net/ntnic: add item IPv4 Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 17/73] net/ntnic: add item port ID Serhii Iliushyk
` (60 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_ICMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 101 ++++++++++++++++++
2 files changed, 102 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index bae25d2e2d..d403ea01f3 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+icmp = Y
ipv4 = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index aa1b5cf15d..8862ac2a0e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -829,6 +829,107 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp *icmp_spec =
+ (const struct rte_flow_item_icmp *)elem[eidx].spec;
+ const struct rte_flow_item_icmp *icmp_mask =
+ (const struct rte_flow_item_icmp *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->hdr.icmp_cksum != 0 ||
+ icmp_mask->hdr.icmp_ident != 0 ||
+ icmp_mask->hdr.icmp_seq_nb != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->hdr.icmp_type || icmp_mask->hdr.icmp_code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ sw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter,
+ any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 17/73] net/ntnic: add item port ID
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (15 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 16/73] net/ntnic: add item ICMP Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 18/73] net/ntnic: add item void Serhii Iliushyk
` (59 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_PORT_ID
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../flow_api/profile_inline/flow_api_profile_inline.c | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index d403ea01f3..cdf119c4ae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,6 +18,7 @@ any = Y
eth = Y
icmp = Y
ipv4 = Y
+port_id = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 8862ac2a0e..6c716695bd 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -930,6 +930,17 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
+ dev->ndev->adapter_no, dev->port);
+
+ if (elem[eidx].spec) {
+ *in_port_id =
+ ((const struct rte_flow_item_port_id *)elem[eidx].spec)->id;
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 18/73] net/ntnic: add item void
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (16 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 17/73] net/ntnic: add item port ID Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 19/73] net/ntnic: add item UDP Serhii Iliushyk
` (58 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_VOID
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../nthw/flow_api/profile_inline/flow_api_profile_inline.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6c716695bd..4681b1b176 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -941,6 +941,10 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_VOID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VOID",
+ dev->ndev->adapter_no, dev->port);
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 19/73] net/ntnic: add item UDP
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (17 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 18/73] net/ntnic: add item void Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 20/73] net/ntnic: add action TCP Serhii Iliushyk
` (57 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ITEM_TYPE_UDP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 103 ++++++++++++++++++
3 files changed, 106 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index cdf119c4ae..61a3d87909 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+udp = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 0c22129fb4..a95fb69870 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 4681b1b176..60f5ac92aa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -830,6 +830,101 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_udp *udp_spec =
+ (const struct rte_flow_item_udp *)elem[eidx].spec;
+ const struct rte_flow_item_udp *udp_mask =
+ (const struct rte_flow_item_udp *)elem[eidx].mask;
+
+ if (udp_spec == NULL || udp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (udp_mask->hdr.dgram_len != 0 ||
+ udp_mask->hdr.dgram_cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested UDP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (udp_mask->hdr.src_port || udp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(udp_mask->hdr.src_port) << 16) |
+ ntohs(udp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(udp_mask->hdr.src_port)
+ << 16) | ntohs(udp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -964,12 +1059,20 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
{
(void)flm_scrub;
switch (fd->l4_prot) {
+ case PROT_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 20/73] net/ntnic: add action TCP
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (18 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 19/73] net/ntnic: add item UDP Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 21/73] net/ntnic: add action VLAN Serhii Iliushyk
` (56 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ITEM_TYPE_TCP item
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 108 ++++++++++++++++++
3 files changed, 111 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 61a3d87909..e3c3982895 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+tcp = Y
udp = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a95fb69870..a1aa74caf5 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -177,6 +178,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 60f5ac92aa..1f076af959 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1026,6 +1026,106 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_tcp *tcp_spec =
+ (const struct rte_flow_item_tcp *)elem[eidx].spec;
+ const struct rte_flow_item_tcp *tcp_mask =
+ (const struct rte_flow_item_tcp *)elem[eidx].mask;
+
+ if (tcp_spec == NULL || tcp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (tcp_mask->hdr.sent_seq != 0 ||
+ tcp_mask->hdr.recv_ack != 0 ||
+ tcp_mask->hdr.data_off != 0 ||
+ tcp_mask->hdr.tcp_flags != 0 ||
+ tcp_mask->hdr.rx_win != 0 ||
+ tcp_mask->hdr.cksum != 0 ||
+ tcp_mask->hdr.tcp_urp != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested TCP field not support by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (tcp_mask->hdr.src_port || tcp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ sw_data[0] =
+ ((ntohs(tcp_spec->hdr.src_port) << 16) |
+ ntohs(tcp_spec->hdr.dst_port)) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(tcp_spec->hdr.src_port)
+ << 16) | ntohs(tcp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1059,6 +1159,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
{
(void)flm_scrub;
switch (fd->l4_prot) {
+ case PROT_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_L4_UDP:
fh->flm_prot = 17;
break;
@@ -1069,6 +1173,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_TUN_L4_UDP:
fh->flm_prot = 17;
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 21/73] net/ntnic: add action VLAN
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (19 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 20/73] net/ntnic: add action TCP Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 22/73] net/ntnic: add item SCTP Serhii Iliushyk
` (55 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ITEM_TYPE_VLAN item
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 96 ++++++++++++++++++-
3 files changed, 96 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e3c3982895..8b4821d6d0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -21,6 +21,7 @@ ipv4 = Y
port_id = Y
tcp = Y
udp = Y
+vlan = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a1aa74caf5..82ac3d0ff3 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -134,6 +134,7 @@ static inline int is_non_zero(const void *addr, size_t n)
enum frame_offs_e {
DYN_L2 = 1,
+ DYN_FIRST_VLAN = 2,
DYN_L3 = 4,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1f076af959..cd5917ec42 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -486,8 +486,6 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
uint32_t *packet_mask,
struct flm_flow_key_def_s *key_def)
{
- (void)fd;
- (void)implicit_vlan_vid;
uint32_t any_count = 0;
@@ -506,6 +504,20 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return -1;
}
+ if (implicit_vlan_vid > 0) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = 0x0fff;
+ sw_data[0] = implicit_vlan_vid & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1, DYN_FIRST_VLAN, 0);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN, 0);
+ sw_counter += 1;
+
+ fd->vlans += 1;
+ }
+
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
@@ -666,6 +678,86 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VLAN",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_vlan_hdr *vlan_spec =
+ (const struct rte_vlan_hdr *)elem[eidx].spec;
+ const struct rte_vlan_hdr *vlan_mask =
+ (const struct rte_vlan_hdr *)elem[eidx].mask;
+
+ if (vlan_spec == NULL || vlan_mask == NULL) {
+ fd->vlans += 1;
+ break;
+ }
+
+ if (!vlan_mask->vlan_tci && !vlan_mask->eth_proto)
+ break;
+
+ if (implicit_vlan_vid > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple VLANs not supported for implicit VLAN patterns.");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM,
+ error);
+ return -1;
+ }
+
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ sw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_qw(key_def, qw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ fd->vlans += 1;
+ }
+
+ break;
case RTE_FLOW_ITEM_TYPE_IPV4:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 22/73] net/ntnic: add item SCTP
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (20 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 21/73] net/ntnic: add action VLAN Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 23/73] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
` (54 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ITEM_TYPE_SCTP item
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 102 ++++++++++++++++++
3 files changed, 105 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b4821d6d0..6691b6dce2 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+sctp = Y
tcp = Y
udp = Y
vlan = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 82ac3d0ff3..f1c57fa9fc 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -169,6 +169,7 @@ enum {
enum {
PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
+ PROT_L4_SCTP = 3,
PROT_L4_ICMP = 4
};
@@ -181,6 +182,7 @@ enum {
PROT_TUN_L4_OTHER = 0,
PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
+ PROT_TUN_L4_SCTP = 3,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index cd5917ec42..9e680f44e1 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1017,6 +1017,100 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ NT_LOG(DBG, FILTER, "Adap %i,Port %i:RTE_FLOW_ITEM_TYPE_SCTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_sctp *sctp_spec =
+ (const struct rte_flow_item_sctp *)elem[eidx].spec;
+ const struct rte_flow_item_sctp *sctp_mask =
+ (const struct rte_flow_item_sctp *)elem[eidx].mask;
+
+ if (sctp_spec == NULL || sctp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (sctp_mask->hdr.tag != 0 || sctp_mask->hdr.cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested SCTP field not support by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (sctp_mask->hdr.src_port || sctp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -1259,6 +1353,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
@@ -1273,6 +1371,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_TUN_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 23/73] net/ntnic: add items IPv6 and ICMPv6
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (21 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 22/73] net/ntnic: add item SCTP Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 24/73] net/ntnic: add action modify field Serhii Iliushyk
` (53 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the following items:
* RTE_FLOW_ITEM_TYPE_IPV6
* RTE_FLOW_ITEM_TYPE_ICMP6
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 2 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 27 ++
.../profile_inline/flow_api_profile_inline.c | 273 ++++++++++++++++++
4 files changed, 304 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 6691b6dce2..320d3c7e0b 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,7 +17,9 @@ x86-64 = Y
any = Y
eth = Y
icmp = Y
+icmp6 = Y
ipv4 = Y
+ipv6 = Y
port_id = Y
sctp = Y
tcp = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index f1c57fa9fc..4f381bc0ef 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -164,6 +164,7 @@ enum {
enum {
PROT_L3_IPV4 = 1,
+ PROT_L3_IPV6 = 2
};
enum {
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
+ PROT_TUN_L3_IPV6 = 2
};
enum {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 4139d42c8c..a366f17e08 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -47,6 +47,33 @@ static const struct {
} err_msg[] = {
/* 00 */ { "Operation successfully completed" },
/* 01 */ { "Operation failed" },
+ /* 02 */ { "Memory allocation failed" },
+ /* 03 */ { "Too many output destinations" },
+ /* 04 */ { "Too many output queues for RSS" },
+ /* 05 */ { "The VLAN TPID specified is not supported" },
+ /* 06 */ { "The VxLan Push header specified is not accepted" },
+ /* 07 */ { "While interpreting VxLan Pop action, could not find a destination port" },
+ /* 08 */ { "Failed in creating a HW-internal VTEP port" },
+ /* 09 */ { "Too many VLAN tag matches" },
+ /* 10 */ { "IPv6 invalid header specified" },
+ /* 11 */ { "Too many tunnel ports. HW limit reached" },
+ /* 12 */ { "Unknown or unsupported flow match element received" },
+ /* 13 */ { "Match failed because of HW limitations" },
+ /* 14 */ { "Match failed because of HW resource limitations" },
+ /* 15 */ { "Match failed because of too complex element definitions" },
+ /* 16 */ { "Action failed. To too many output destinations" },
+ /* 17 */ { "Action Output failed, due to HW resource exhaustion" },
+ /* 18 */ { "Push Tunnel Header action cannot output to multiple destination queues" },
+ /* 19 */ { "Inline action HW resource exhaustion" },
+ /* 20 */ { "Action retransmit/recirculate HW resource exhaustion" },
+ /* 21 */ { "Flow counter HW resource exhaustion" },
+ /* 22 */ { "Internal HW resource exhaustion to handle Actions" },
+ /* 23 */ { "Internal HW QSL compare failed" },
+ /* 24 */ { "Internal CAT CFN reuse failed" },
+ /* 25 */ { "Match variations too complex" },
+ /* 26 */ { "Match failed because of CAM/TCAM full" },
+ /* 27 */ { "Internal creation of a tunnel end point port failed" },
+ /* 28 */ { "Unknown or unsupported flow action received" },
/* 29 */ { "Removing flow failed" },
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9e680f44e1..5041b28f14 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -538,6 +538,22 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6: {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec != NULL && ipv6_mask != NULL) {
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16))
+ qw_reserved_ipv6 += 1;
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16))
+ qw_reserved_ipv6 += 1;
+ }
+ }
+ break;
+
default:
break;
}
@@ -922,6 +938,164 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec == NULL || ipv6_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ break;
+ }
+
+ fd->l3_prot = PROT_L3_IPV6;
+ if (ipv6_mask->hdr.vtc_flow != 0 ||
+ ipv6_mask->hdr.payload_len != 0 ||
+ ipv6_mask->hdr.hop_limits != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv6 field not support by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.src_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.src_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ qw_counter += 1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.dst_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.dst_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 24);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 24);
+ qw_counter += 1;
+ }
+
+ if (ipv6_mask->hdr.proto != 0) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv6_mask->hdr.proto << 8;
+ sw_data[0] = ipv6_spec->hdr.proto << 8 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = 0;
+ qw_data[1] = ipv6_spec->hdr.proto << 8;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = 0;
+ qw_mask[1] = ipv6_mask->hdr.proto << 8;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_UDP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
dev->ndev->adapter_no, dev->port);
@@ -1212,6 +1386,105 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp6 *icmp_spec =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].spec;
+ const struct rte_flow_item_icmp6 *icmp_mask =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP6 field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->type || icmp_mask->code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ sw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_TCP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
dev->ndev->adapter_no, dev->port);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 24/73] net/ntnic: add action modify field
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (22 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 23/73] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 25/73] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
` (52 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 7 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 181 ++++++++++++++++++
4 files changed, 190 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 320d3c7e0b..4201c8e8b9 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -30,5 +30,6 @@ vlan = Y
drop = Y
jump = Y
mark = Y
+modify_field = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 13fad2760a..f6557d0d20 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,10 @@ struct nic_flow_def {
*/
struct {
uint32_t select;
+ uint32_t dyn;
+ uint32_t ofs;
+ uint32_t len;
+ uint32_t level;
union {
uint8_t value8[16];
uint16_t value16[8];
@@ -137,6 +141,9 @@ struct nic_flow_def {
} modify_field[MAX_CPY_WRITERS_SUPPORTED];
uint32_t modify_field_count;
+ uint8_t ttl_sub_enable;
+ uint8_t ttl_sub_ipv4;
+ uint8_t ttl_sub_outer;
/*
* Key Matcher flow definitions
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 4f381bc0ef..6a8a38636f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -140,6 +140,7 @@ enum frame_offs_e {
DYN_L4_PAYLOAD = 8,
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
+ DYN_TUN_L4_PAYLOAD = 17,
};
/* Sideband info bit indicator */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 5041b28f14..24476df817 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -322,6 +322,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
{
unsigned int encap_decap_order = 0;
+ uint64_t modify_field_use_flags = 0x0;
+
*num_dest_port = 0;
*num_queues = 0;
@@ -460,6 +462,185 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
+ {
+ /* Note: This copy method will not work for FLOW_FIELD_POINTER */
+ struct rte_flow_action_modify_field modify_field_tmp;
+ const struct rte_flow_action_modify_field *modify_field =
+ memcpy_mask_if(&modify_field_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_modify_field));
+
+ uint64_t modify_field_use_flag = 0;
+
+ if (modify_field->src.field != RTE_FLOW_FIELD_VALUE) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only src type VALUE is supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.level > 2) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only dst levels 0, 1, and 2 are supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL ||
+ modify_field->dst.field == RTE_FLOW_FIELD_IPV6_HOPLIMIT) {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SUB) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SUB is supported for TTL/HOPLIMIT.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->ttl_sub_enable) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD TTL/HOPLIMIT resource already in use.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->ttl_sub_enable = 1;
+ fd->ttl_sub_ipv4 =
+ (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL)
+ ? 1
+ : 0;
+ fd->ttl_sub_outer = (modify_field->dst.level <= 1) ? 1 : 0;
+
+ } else {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SET) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SET is supported in general.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->modify_field_count >=
+ dev->ndev->be.tpe.nb_cpy_writers) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD exceeded maximum of %u MODIFY_FIELD actions.",
+ dev->ndev->be.tpe.nb_cpy_writers);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ int mod_outer = modify_field->dst.level <= 1;
+
+ switch (modify_field->dst.field) {
+ case RTE_FLOW_FIELD_IPV4_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 1;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV6_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV6;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ /*
+ * len=2 is needed because
+ * IPv6 DSCP overlaps 2 bytes.
+ */
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_PSC_QFI:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_RQI_QFI;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 14;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 12;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 16;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_SRC:
+ case RTE_FLOW_FIELD_UDP_PORT_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_DST:
+ case RTE_FLOW_FIELD_UDP_PORT_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 2;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_TEID:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_TEID;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 4;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type is not supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ modify_field_use_flag = 1
+ << fd->modify_field[fd->modify_field_count].select;
+
+ if (modify_field_use_flag & modify_field_use_flags) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type hardware resource already used.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ memcpy(fd->modify_field[fd->modify_field_count].value8,
+ modify_field->src.value, 16);
+
+ fd->modify_field[fd->modify_field_count].level =
+ modify_field->dst.level;
+
+ modify_field_use_flags |= modify_field_use_flag;
+ fd->modify_field_count += 1;
+ }
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
* [PATCH v1 25/73] net/ntnic: add items gtp and actions raw encap/decap
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (23 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 24/73] net/ntnic: add action modify filed Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 26/73] net/ntnic: add cat module Serhii Iliushyk
` (51 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the following flow items and actions:
* RTE_FLOW_ITEM_TYPE_GTP
* RTE_FLOW_ITEM_TYPE_GTP_PSC
* RTE_FLOW_ACTION_TYPE_RAW_ENCAP
* RTE_FLOW_ACTION_TYPE_RAW_DECAP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 4 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/flow_api_engine.h | 40 ++
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/include/stream_binary_flow_api.h | 22 ++
.../profile_inline/flow_api_profile_inline.c | 365 +++++++++++++++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 274 ++++++++++++-
7 files changed, 708 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4201c8e8b9..4cb9509742 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,8 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+gtp = Y
+gtp_psc = Y
icmp = Y
icmp6 = Y
ipv4 = Y
@@ -33,3 +35,5 @@ mark = Y
modify_field = Y
port_id = Y
queue = Y
+raw_decap = Y
+raw_encap = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 179542d2b2..70e6cad195 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,8 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct flow_action_raw_encap encap;
+ struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
};
@@ -52,6 +54,8 @@ enum nt_rte_flow_item_type {
};
extern rte_spinlock_t flow_lock;
+
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out);
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index f6557d0d20..b1d39b919b 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -56,6 +56,29 @@ enum res_type_e {
#define MAX_MATCH_FIELDS 16
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+ uint32_t user_port_id;
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+ uint16_t ip_csum_precalc;
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
+};
+
struct match_elem_s {
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
@@ -124,6 +147,23 @@ struct nic_flow_def {
int full_offload;
+ /*
+ * Action push tunnel
+ */
+ struct tunnel_header_s tun_hdr;
+
+ /*
+ * If DPDK RTE tunnel helper API used
+ * this holds the tunnel if used in flow
+ */
+ struct tunnel_s *tnl;
+
+ /*
+ * Header Stripper
+ */
+ int header_strip_end_dyn;
+ int header_strip_end_ofs;
+
/*
* Modify field
*/
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6a8a38636f..1b45ea4296 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -175,6 +175,10 @@ enum {
PROT_L4_ICMP = 4
};
+enum {
+ PROT_TUN_GTPV1U = 6,
+};
+
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index d878b848c2..8097518d61 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -18,6 +18,7 @@
#define FLOW_MAX_QUEUES 128
+#define RAW_ENCAP_DECAP_ELEMS_MAX 16
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
@@ -31,6 +32,27 @@ struct flow_queue_id_s {
int hw_id;
};
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ */
+struct flow_action_raw_encap {
+ uint8_t *data;
+ uint8_t *preserve;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ */
+struct flow_action_raw_decap {
+ uint8_t *data;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
struct flow_eth_dev; /* port device */
struct flow_handle;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 24476df817..af4763ea3f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -462,6 +462,202 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
+
+ if (action[aidx].conf) {
+ const struct flow_action_raw_encap *encap =
+ (const struct flow_action_raw_encap *)action[aidx].conf;
+ const struct flow_action_raw_encap *encap_mask = action_mask
+ ? (const struct flow_action_raw_encap *)action_mask[aidx]
+ .conf
+ : NULL;
+ const struct rte_flow_item *items = encap->items;
+
+ if (encap_decap_order != 1) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (encap->size == 0 || encap->size > 255 ||
+ encap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP data/size invalid.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 2;
+
+ fd->tun_hdr.len = (uint8_t)encap->size;
+
+ if (encap_mask) {
+ memcpy_mask_if(fd->tun_hdr.d.hdr8, encap->data,
+ encap_mask->data, fd->tun_hdr.len);
+
+ } else {
+ memcpy(fd->tun_hdr.d.hdr8, encap->data, fd->tun_hdr.len);
+ }
+
+ while (items->type != RTE_FLOW_ITEM_TYPE_END) {
+ switch (items->type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ fd->tun_hdr.l2_len = 14;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->tun_hdr.nb_vlans += 1;
+ fd->tun_hdr.l2_len += 4;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ fd->tun_hdr.ip_version = 4;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv4_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 3] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->tun_hdr.ip_version = 6;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv6_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_sctp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_tcp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_udp_hdr);
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_icmp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->tun_hdr.l4_len =
+ sizeof(struct rte_flow_item_icmp6);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 3] = 0xfd;
+ break;
+
+ default:
+ break;
+ }
+
+ items++;
+ }
+
+ if (fd->tun_hdr.nb_vlans > 3) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Encapsulation with %d vlans not supported.",
+ (int)fd->tun_hdr.nb_vlans);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ /* Convert encap data to 128-bit little endian */
+ for (size_t i = 0; i < (encap->size + 15) / 16; ++i) {
+ uint8_t *data = fd->tun_hdr.d.hdr8 + i * 16;
+
+ for (unsigned int j = 0; j < 8; ++j) {
+ uint8_t t = data[j];
+ data[j] = data[15 - j];
+ data[15 - j] = t;
+ }
+ }
+ }
+
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_DECAP", dev);
+
+ if (action[aidx].conf) {
+ /* Mask is N/A for RAW_DECAP */
+ const struct flow_action_raw_decap *decap =
+ (const struct flow_action_raw_decap *)action[aidx].conf;
+
+ if (encap_decap_order != 0) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (decap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_DECAP must decap something.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 1;
+
+ switch (decap->items[decap->item_count - 2].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->header_strip_end_dyn = DYN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->header_strip_end_dyn = DYN_L4;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->header_strip_end_dyn = DYN_L4_PAYLOAD;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ fd->header_strip_end_dyn = DYN_TUN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ default:
+ fd->header_strip_end_dyn = DYN_L2;
+ fd->header_strip_end_ofs = 0;
+ break;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
{
@@ -1766,6 +1962,174 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_hdr *gtp_spec =
+ (const struct rte_gtp_hdr *)elem[eidx].spec;
+ const struct rte_gtp_hdr *gtp_mask =
+ (const struct rte_gtp_hdr *)elem[eidx].mask;
+
+ if (gtp_spec == NULL || gtp_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_mask->gtp_hdr_info != 0 ||
+ gtp_mask->msg_type != 0 || gtp_mask->plen != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP field is not supported by the running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_mask->teid) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_mask->teid);
+ sw_data[0] =
+ ntohl(gtp_spec->teid) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_spec->teid);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_mask->teid);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP_PSC:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP_PSC",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_spec =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].spec;
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_mask =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].mask;
+
+ if (gtp_psc_spec == NULL || gtp_psc_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_psc_mask->type != 0 ||
+ gtp_psc_mask->ext_hdr_len != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP PSC field is not supported by the running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_psc_mask->qfi) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ sw_data[0] = ntohl(gtp_psc_spec->qfi) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 14);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_psc_spec->qfi);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 14);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1950,7 +2314,6 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
(void)forced_vlan_vid;
(void)num_dest_port;
(void)num_queues;
- (void)packet_data;
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 83ca52a2ad..df391b6399 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -16,6 +16,211 @@
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out)
+{
+ int hdri = 0;
+ int pkti = 0;
+
+ /* Ethernet */
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_ether_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ETH;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ rte_be16_t ether_type = ((struct rte_ether_hdr *)&data[pkti])->ether_type;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ether_hdr);
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* VLAN */
+ while (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ1)) {
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_vlan_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_VLAN;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ ether_type = ((struct rte_vlan_hdr *)&data[pkti])->eth_proto;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_vlan_hdr);
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 3 */
+ uint8_t next_header = 0;
+
+ if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) && (data[pkti] & 0xF0) == 0x40) {
+ if (size - pkti < (int)sizeof(struct rte_ipv4_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_header = data[pkti + 9];
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ipv4_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 4 */
+ int gtpu_encap = 0;
+
+ if (next_header == 1) { /* ICMP */
+ if (size - pkti < (int)sizeof(struct rte_icmp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 58) { /* ICMP6 */
+ if (size - pkti < (int)sizeof(struct rte_flow_item_icmp6))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP6;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 6) { /* TCP */
+ if (size - pkti < (int)sizeof(struct rte_tcp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_TCP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_tcp_hdr);
+
+ } else if (next_header == 17) { /* UDP */
+ if (size - pkti < (int)sizeof(struct rte_udp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_UDP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ gtpu_encap = ((struct rte_udp_hdr *)&data[pkti])->dst_port ==
+ rte_cpu_to_be_16(RTE_GTPU_UDP_PORT);
+
+ hdri += 1;
+ pkti += sizeof(struct rte_udp_hdr);
+
+ } else if (next_header == 132) {/* SCTP */
+ if (size - pkti < (int)sizeof(struct rte_sctp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_SCTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_sctp_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* GTPv1-U */
+ if (gtpu_encap) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ int extension_present_bit = ((struct rte_gtp_hdr *)&data[pkti])->e;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr);
+
+ if (extension_present_bit) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr_ext_word))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ uint8_t next_ext = ((struct rte_gtp_hdr_ext_word *)&data[pkti])->next_ext;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr_ext_word);
+
+ while (next_ext) {
+ size_t ext_len = data[pkti] * 4;
+
+ if (size - pkti < (int)ext_len)
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_ext = data[pkti + ext_len - 1];
+
+ hdri += 1;
+ pkti += ext_len;
+ }
+ }
+ }
+
+ if (size - pkti != 0)
+ return -1;
+
+interpret_end:
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_END;
+ out[hdri].spec = NULL;
+ out[hdri].mask = NULL;
+
+ return hdri + 1;
+}
+
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
{
if (error) {
@@ -100,12 +305,73 @@ int create_action_elements_inline(struct cnv_action_s *action,
int max_elem,
uint32_t queue_offset)
{
- (void)action;
- (void)actions;
- (void)max_elem;
- (void)queue_offset;
+ int aidx = 0;
int type = -1;
+ do {
+ type = actions[aidx].type;
+ if (type >= 0) {
+ action->flow_actions[aidx].type = type;
+
+ /*
+ * Non-compatible actions handled here
+ */
+ switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
+ const struct rte_flow_action_raw_decap *decap =
+ (const struct rte_flow_action_raw_decap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(decap->data, NULL, decap->size,
+ action->decap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->decap.data = decap->data;
+ action->decap.size = decap->size;
+ action->decap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->decap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: {
+ const struct rte_flow_action_raw_encap *encap =
+ (const struct rte_flow_action_raw_encap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(encap->data, encap->preserve,
+ encap->size, action->encap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->encap.data = encap->data;
+ action->encap.preserve = encap->preserve;
+ action->encap.size = encap->size;
+ action->encap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->encap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE: {
+ const struct rte_flow_action_queue *queue =
+ (const struct rte_flow_action_queue *)actions[aidx].conf;
+ action->queue.index = queue->index + queue_offset;
+ action->flow_actions[aidx].conf = &action->queue;
+ }
+ break;
+
+ default: {
+ action->flow_actions[aidx].conf = actions[aidx].conf;
+ }
+ break;
+ }
+
+ aidx++;
+
+ if (aidx == max_elem)
+ return -1;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
return (type >= 0) ? 0 : -1;
}
--
2.45.0
* [PATCH v1 26/73] net/ntnic: add cat module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (24 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 25/73] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 27/73] net/ntnic: add SLC LR module Serhii Iliushyk
` (50 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Categorizer (CAT) module's main purpose is to select the behavior
of other modules in the FPGA pipeline based on a protocol check.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 24 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 267 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 165 +++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 47 +++
.../profile_inline/flow_api_profile_inline.c | 83 ++++++
5 files changed, 586 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 1b45ea4296..87fc16ecb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -315,11 +315,35 @@ int hw_mod_cat_reset(struct flow_api_backend_s *be);
int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
+/* KCE/KCS/FTE KM */
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+/* KCE/KCS/FTE FLM */
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count);
+
int hw_mod_cat_kcc_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_exo_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index d266760123..9164ec1ae0 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -951,6 +951,97 @@ static int hw_mod_cat_fte_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_fte_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_fte_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ const uint32_t key_cnt = (_VER_ >= 20) ? 4 : 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8 * be->cat.nb_flow_types * key_cnt)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v18.fte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v21.fte[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, value, 1);
+}
+
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -964,6 +1055,45 @@ int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cte_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cte_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTE_ENABLE_BM:
+ GET_SET(be->cat.v18.cte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -979,6 +1109,51 @@ int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cts_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cts_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ int addr_size = (be->cat.cts_num + 1) / 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs * addr_size)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTS_CAT_A:
+ GET_SET(be->cat.v18.cts[index].cat_a, value);
+ break;
+
+ case HW_CAT_CTS_CAT_B:
+ GET_SET(be->cat.v18.cts[index].cat_b, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -992,6 +1167,98 @@ int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cot_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cot_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_COT_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->cat.v18.cot[index], (uint8_t)*value,
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->cat.v18.cot, struct cat_v18_cot_s, index, *value);
+ break;
+
+ case HW_CAT_COT_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->cat.v18.cot, struct cat_v18_cot_s, index, *value,
+ be->max_categories);
+ break;
+
+ case HW_CAT_COT_COPY_FROM:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memcpy(&be->cat.v18.cot[index], &be->cat.v18.cot[*value],
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COLOR:
+ GET_SET(be->cat.v18.cot[index].color, value);
+ break;
+
+ case HW_CAT_COT_KM:
+ GET_SET(be->cat.v18.cot[index].km, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cot_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index eb6bad07b8..4d5bcbef49 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -21,6 +21,14 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
+ /* Items */
+ struct hw_db_inline_resource_db_cat {
+ struct hw_db_inline_cat_data data;
+ int ref;
+ } *cat;
+
+ uint32_t nb_cat;
+
/* Hardware */
struct hw_db_inline_resource_db_cfn {
@@ -46,6 +54,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_cat = ndev->be.cat.nb_cat_funcs;
+ db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
+
+ if (db->cat == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -55,6 +71,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->cat);
free(db->cfn);
@@ -69,6 +86,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_CAT:
+ hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_COT:
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
@@ -79,6 +100,69 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+/******************************************************************************/
+/* Filter */
+/******************************************************************************/
+
+/*
+ * Setup a filter to match:
+ * All packets in CFN checks
+ * All packets in KM
+ * All packets in FLM with look-up C FT equal to specified argument
+ *
+ * Setup a QSL recipe to DROP all matching packets
+ *
+ * Note: QSL recipe 0 uses DISCARD in order to allow for exception paths (UNMQ)
+ * Consequently another QSL recipe with hard DROP is needed
+ */
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id)
+{
+ (void)ft;
+ (void)qsl_hw_id;
+
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+ (void)offset;
+
+ /* Select and enable QSL recipe */
+ if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
+ return -1;
+
+ if (hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6))
+ return -1;
+
+ if (hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0x8))
+ return -1;
+
+ if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ /* Make all CFN checks TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, 0x0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x1))
+ return -1;
+
+ /* Final match: look-up_A == TRUE && look-up_C == TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3))
+ return -1;
+
+ if (hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ return 0;
+}
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -149,3 +233,84 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->cot[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* CAT */
+/******************************************************************************/
+
+static int hw_db_inline_cat_compare(const struct hw_db_inline_cat_data *data1,
+ const struct hw_db_inline_cat_data *data2)
+{
+ return data1->vlan_mask == data2->vlan_mask &&
+ data1->mac_port_mask == data2->mac_port_mask &&
+ data1->ptc_mask_frag == data2->ptc_mask_frag &&
+ data1->ptc_mask_l2 == data2->ptc_mask_l2 &&
+ data1->ptc_mask_l3 == data2->ptc_mask_l3 &&
+ data1->ptc_mask_l4 == data2->ptc_mask_l4 &&
+ data1->ptc_mask_tunnel == data2->ptc_mask_tunnel &&
+ data1->ptc_mask_l3_tunnel == data2->ptc_mask_l3_tunnel &&
+ data1->ptc_mask_l4_tunnel == data2->ptc_mask_l4_tunnel &&
+ data1->err_mask_ttl_tunnel == data2->err_mask_ttl_tunnel &&
+ data1->err_mask_ttl == data2->err_mask_ttl && data1->ip_prot == data2->ip_prot &&
+ data1->ip_prot_tunnel == data2->ip_prot_tunnel;
+}
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cat_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_CAT;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ int ref = db->cat[i].ref;
+
+ if (ref > 0 && hw_db_inline_cat_compare(data, &db->cat[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cat_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cat[idx.ids].ref = 1;
+ memcpy(&db->cat[idx.ids].data, data, sizeof(struct hw_db_inline_cat_data));
+
+ return idx;
+}
+
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cat[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cat[idx.ids].ref -= 1;
+
+ if (db->cat[idx.ids].ref <= 0) {
+ memset(&db->cat[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cat_data));
+ db->cat[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 0116af015d..38502ac1ec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,12 +36,37 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_cat_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
+ HW_DB_IDX_TYPE_CAT,
};
/* Functionality data types */
+struct hw_db_inline_cat_data {
+ uint32_t vlan_mask : 4;
+ uint32_t mac_port_mask : 8;
+ uint32_t ptc_mask_frag : 4;
+ uint32_t ptc_mask_l2 : 7;
+ uint32_t ptc_mask_l3 : 3;
+ uint32_t ptc_mask_l4 : 5;
+ uint32_t padding0 : 1;
+
+ uint32_t ptc_mask_tunnel : 11;
+ uint32_t ptc_mask_l3_tunnel : 3;
+ uint32_t ptc_mask_l4_tunnel : 5;
+ uint32_t err_mask_ttl_tunnel : 2;
+ uint32_t err_mask_ttl : 2;
+ uint32_t padding1 : 9;
+
+ uint8_t ip_prot;
+ uint8_t ip_prot_tunnel;
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -70,6 +95,16 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ };
+ };
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -84,4 +119,16 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+/**/
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data);
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+
+/**/
+
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index af4763ea3f..f7babec3b4 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -20,6 +20,10 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2366,6 +2370,67 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ struct hw_db_inline_action_set_data action_set_data = { 0 };
+ (void)action_set_data;
+
+ if (fd->jump_to_group != UINT32_MAX) {
+ /* Action Set only contains jump */
+ action_set_data.contains_jump = 1;
+ action_set_data.jump = fd->jump_to_group;
+
+ } else {
+ /* Action Set doesn't contain jump */
+ action_set_data.contains_jump = 0;
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = 0,
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle,
+ &cot_data);
+ fh->db_idxs[fh->db_idx_counter++] = cot_idx.raw;
+ action_set_data.cot = cot_idx;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
+
+ /* Setup CAT */
+ struct hw_db_inline_cat_data cat_data = {
+ .vlan_mask = (0xf << fd->vlans) & 0xf,
+ .mac_port_mask = 1 << fh->port_id,
+ .ptc_mask_frag = fd->fragmentation,
+ .ptc_mask_l2 = fd->l2_prot != -1 ? (1 << fd->l2_prot) : -1,
+ .ptc_mask_l3 = fd->l3_prot != -1 ? (1 << fd->l3_prot) : -1,
+ .ptc_mask_l4 = fd->l4_prot != -1 ? (1 << fd->l4_prot) : -1,
+ .err_mask_ttl = (fd->ttl_sub_enable &&
+ fd->ttl_sub_outer) ? -1 : 0x1,
+ .ptc_mask_tunnel = fd->tunnel_prot !=
+ -1 ? (1 << fd->tunnel_prot) : -1,
+ .ptc_mask_l3_tunnel =
+ fd->tunnel_l3_prot != -1 ? (1 << fd->tunnel_l3_prot) : -1,
+ .ptc_mask_l4_tunnel =
+ fd->tunnel_l4_prot != -1 ? (1 << fd->tunnel_l4_prot) : -1,
+ .err_mask_ttl_tunnel =
+ (fd->ttl_sub_enable && !fd->ttl_sub_outer) ? -1 : 0x1,
+ .ip_prot = fd->ip_prot,
+ .ip_prot_tunnel = fd->tunnel_ip_prot,
+ };
+ struct hw_db_cat_idx cat_idx =
+ hw_db_inline_cat_add(dev->ndev, dev->ndev->hw_db_handle, &cat_data);
+ fh->db_idxs[fh->db_idx_counter++] = cat_idx.raw;
+
+ if (cat_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference CAT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2398,6 +2463,20 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* COT is locked to CFN. Don't set color for CFN 0 */
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+
+ if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ /* Setup filter using matching all packets violating traffic policing parameters */
+ flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+
+ if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE,
+ NT_VIOLATING_MBR_QSL) < 0)
+ goto err_exit0;
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -2432,6 +2511,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PRESET_ALL, 0, 0, 0);
+ hw_mod_cat_cfn_flush(&ndev->be, 0, 1);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+ hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
hw_mod_tpe_reset(&ndev->be);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 27/73] net/ntnic: add SLC LR module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (25 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 26/73] net/ntnic: add cat module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 28/73] net/ntnic: add PDB module Serhii Iliushyk
` (49 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new head to the packet.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 104 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 19 ++++
.../profile_inline/flow_api_profile_inline.c | 27 +++++
5 files changed, 252 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 87fc16ecb4..2711f44083 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -697,6 +697,8 @@ int hw_mod_slc_lr_alloc(struct flow_api_backend_s *be);
void hw_mod_slc_lr_free(struct flow_api_backend_s *be);
int hw_mod_slc_lr_reset(struct flow_api_backend_s *be);
int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value);
struct pdb_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
index 1d878f3f96..30e5e38690 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
@@ -66,3 +66,103 @@ int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int co
return be->iface->slc_lr_rcp_flush(be->be_dev, &be->slc_lr, start_idx, count);
}
+
+static int hw_mod_slc_lr_rcp_mod(struct flow_api_backend_s *be, enum hw_slc_lr_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 2:
+ switch (field) {
+ case HW_SLC_LR_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->slc_lr.v2.rcp[index], (uint8_t)*value,
+ sizeof(struct hw_mod_slc_lr_v2_s));
+ break;
+
+ case HW_SLC_LR_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value, be->max_categories);
+ break;
+
+ case HW_SLC_LR_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].head_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].tail_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_PCAP:
+ GET_SET(be->slc_lr.v2.rcp[index].pcap, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_slc_lr_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4d5bcbef49..35edd2d1a3 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -19,7 +19,13 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_slc_lr {
+ struct hw_db_inline_slc_lr_data data;
+ int ref;
+ } *slc_lr;
+
uint32_t nb_cot;
+ uint32_t nb_slc_lr;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -54,6 +60,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_slc_lr = ndev->be.max_categories;
+ db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
+
+ if (db->slc_lr == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -71,6 +85,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->slc_lr);
free(db->cat);
free(db->cfn);
@@ -94,6 +109,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_SLC_LR:
+ hw_db_inline_slc_lr_deref(ndev, db_handle,
+ *(struct hw_db_slc_lr_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -234,6 +254,90 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
}
}
+/******************************************************************************/
+/* SLC_LR */
+/******************************************************************************/
+
+static int hw_db_inline_slc_lr_compare(const struct hw_db_inline_slc_lr_data *data1,
+ const struct hw_db_inline_slc_lr_data *data2)
+{
+ if (!data1->head_slice_en)
+ return data1->head_slice_en == data2->head_slice_en;
+
+ return data1->head_slice_en == data2->head_slice_en &&
+ data1->head_slice_dyn == data2->head_slice_dyn &&
+ data1->head_slice_ofs == data2->head_slice_ofs;
+}
+
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_slc_lr_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_SLC_LR;
+
+ for (uint32_t i = 1; i < db->nb_slc_lr; ++i) {
+ int ref = db->slc_lr[i].ref;
+
+ if (ref > 0 && hw_db_inline_slc_lr_compare(data, &db->slc_lr[i].data)) {
+ idx.ids = i;
+ hw_db_inline_slc_lr_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->slc_lr[idx.ids].ref = 1;
+ memcpy(&db->slc_lr[idx.ids].data, data, sizeof(struct hw_db_inline_slc_lr_data));
+
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_SLC_EN, idx.ids, data->head_slice_en);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_DYN, idx.ids, data->head_slice_dyn);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_OFS, idx.ids, data->head_slice_ofs);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->slc_lr[idx.ids].ref += 1;
+}
+
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->slc_lr[idx.ids].ref -= 1;
+
+ if (db->slc_lr[idx.ids].ref <= 0) {
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->slc_lr[idx.ids].data, 0x0, sizeof(struct hw_db_inline_slc_lr_data));
+ db->slc_lr[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 38502ac1ec..ef63336b1c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -40,10 +40,15 @@ struct hw_db_cat_idx {
HW_DB_IDX;
};
+struct hw_db_slc_lr_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_SLC_LR,
};
/* Functionality data types */
@@ -89,6 +94,13 @@ struct hw_db_inline_cot_data {
uint32_t padding : 24;
};
+struct hw_db_inline_slc_lr_data {
+ uint32_t head_slice_en : 1;
+ uint32_t head_slice_dyn : 5;
+ uint32_t head_slice_ofs : 8;
+ uint32_t padding : 18;
+};
+
struct hw_db_inline_hsh_data {
uint32_t func;
uint64_t hash_mask;
@@ -119,6 +131,13 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data);
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f7babec3b4..c2a0273aa2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2302,6 +2302,26 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
(void)hsh_data;
(void)error;
+ /* Setup SLC LR */
+ struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
+
+ if (fd->header_strip_end_dyn != 0 || fd->header_strip_end_ofs != 0) {
+ struct hw_db_inline_slc_lr_data slc_lr_data = {
+ .head_slice_en = 1,
+ .head_slice_dyn = fd->header_strip_end_dyn,
+ .head_slice_ofs = fd->header_strip_end_ofs,
+ };
+ slc_lr_idx =
+ hw_db_inline_slc_lr_add(dev->ndev, dev->ndev->hw_db_handle, &slc_lr_data);
+ local_idxs[(*local_idx_counter)++] = slc_lr_idx.raw;
+
+ if (slc_lr_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference SLC LR resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -2469,6 +2489,9 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* SLC LR index 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2517,6 +2540,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
+
hw_mod_tpe_reset(&ndev->be);
flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 28/73] net/ntnic: add PDB module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (26 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 27/73] net/ntnic: add SLC LR module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 29/73] net/ntnic: add QSL module Serhii Iliushyk
` (48 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Packet Description Builder module creates packet meta-data,
for example virtio-net headers.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 ++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 17 +++
3 files changed, 164 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 2711f44083..7f1449d8ee 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -740,6 +740,9 @@ int hw_mod_pdb_alloc(struct flow_api_backend_s *be);
void hw_mod_pdb_free(struct flow_api_backend_s *be);
int hw_mod_pdb_reset(struct flow_api_backend_s *be);
int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value);
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be);
struct tpe_func_s {
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
index c3facacb08..59285405ba 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
@@ -85,6 +85,150 @@ int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->pdb_rcp_flush(be->be_dev, &be->pdb, start_idx, count);
}
+static int hw_mod_pdb_rcp_mod(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 9:
+ switch (field) {
+ case HW_PDB_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->pdb.v9.rcp[index], (uint8_t)*value,
+ sizeof(struct pdb_v9_rcp_s));
+ break;
+
+ case HW_PDB_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value,
+ be->pdb.nb_pdb_rcp_categories);
+ break;
+
+ case HW_PDB_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value);
+ break;
+
+ case HW_PDB_RCP_DESCRIPTOR:
+ GET_SET(be->pdb.v9.rcp[index].descriptor, value);
+ break;
+
+ case HW_PDB_RCP_DESC_LEN:
+ GET_SET(be->pdb.v9.rcp[index].desc_len, value);
+ break;
+
+ case HW_PDB_RCP_TX_PORT:
+ GET_SET(be->pdb.v9.rcp[index].tx_port, value);
+ break;
+
+ case HW_PDB_RCP_TX_IGNORE:
+ GET_SET(be->pdb.v9.rcp[index].tx_ignore, value);
+ break;
+
+ case HW_PDB_RCP_TX_NOW:
+ GET_SET(be->pdb.v9.rcp[index].tx_now, value);
+ break;
+
+ case HW_PDB_RCP_CRC_OVERWRITE:
+ GET_SET(be->pdb.v9.rcp[index].crc_overwrite, value);
+ break;
+
+ case HW_PDB_RCP_ALIGN:
+ GET_SET(be->pdb.v9.rcp[index].align, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs0_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs0_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs1_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs1_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs2_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs2_rel, value);
+ break;
+
+ case HW_PDB_RCP_IP_PROT_TNL:
+ GET_SET(be->pdb.v9.rcp[index].ip_prot_tnl, value);
+ break;
+
+ case HW_PDB_RCP_PPC_HSH:
+ GET_SET(be->pdb.v9.rcp[index].ppc_hsh, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_EN:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_en, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_BIT:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_bit, value);
+ break;
+
+ case HW_PDB_RCP_PCAP_KEEP_FCS:
+ GET_SET(be->pdb.v9.rcp[index].pcap_keep_fcs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 9 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_pdb_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be)
{
return be->iface->pdb_config_flush(be->be_dev, &be->pdb);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index c2a0273aa2..fe5f82b3bd 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2492,6 +2492,19 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ /* PDB setup: Direct Virtio scatter-gather descriptor of 12 bytes for recipe 0 */
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESCRIPTOR, 0, 7) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESC_LEN, 0, 6) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2549,6 +2562,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+ hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_PRESET_ALL, 0, 0);
+ hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
--
2.45.0
* [PATCH v1 29/73] net/ntnic: add QSL module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (27 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 28/73] net/ntnic: add PDB module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 30/73] net/ntnic: add KM module Serhii Iliushyk
` (47 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Queue Selector module directs packets to a given destination,
which can be host queues, physical ports, exception paths, or discard.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/hw_mod_backend.h | 8 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 65 ++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 94 ++++++++
7 files changed, 594 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 7f031ccda8..edffd0a57a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -184,8 +184,11 @@ extern const char *dbg_res_descr[];
int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
uint32_t alignment);
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment);
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
#endif
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 7f1449d8ee..6fa2a3d94f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -666,8 +666,16 @@ int hw_mod_qsl_alloc(struct flow_api_backend_s *be);
void hw_mod_qsl_free(struct flow_api_backend_s *be);
int hw_mod_qsl_reset(struct flow_api_backend_s *be);
int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value);
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_qsl_unmq_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
uint32_t value);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index a366f17e08..4303a2c759 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -106,11 +106,52 @@ int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return -1;
}
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment)
+{
+ unsigned int idx_offs;
+
+ for (unsigned int res_idx = 0; res_idx < ndev->res[res_type].resource_count - (num - 1);
+ res_idx += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, res_idx)) {
+ for (idx_offs = 1; idx_offs < num; idx_offs++)
+ if (flow_nic_is_resource_used(ndev, res_type, res_idx + idx_offs))
+ break;
+
+ if (idx_offs < num)
+ continue;
+
+ /* found a contiguous number of "num" res_type elements - allocate them */
+ for (idx_offs = 0; idx_offs < num; idx_offs++) {
+ flow_nic_mark_resource_used(ndev, res_type, res_idx + idx_offs);
+ ndev->res[res_type].ref[res_idx + idx_offs] = 1;
+ }
+
+ return res_idx;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
}
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
+{
+ NT_LOG(DBG, FILTER, "Reference resource %s idx %i (before ref cnt %i)",
+ dbg_res_descr[res_type], index, ndev->res[res_type].ref[index]);
+ assert(flow_nic_is_resource_used(ndev, res_type, index));
+
+ if (ndev->res[res_type].ref[index] == (uint32_t)-1)
+ return -1;
+
+ ndev->res[res_type].ref[index]++;
+ return 0;
+}
+
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
{
NT_LOG(DBG, FILTER, "De-reference resource %s idx %i (before ref cnt %i)",
@@ -358,6 +399,18 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 0);
hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1);
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (int i = 0; i < eth_dev->num_queues; ++i) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value & ~(1U << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
#ifdef FLOW_DEBUG
ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
#endif
@@ -590,6 +643,18 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->rss_target_id = -1;
+ if (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (i = 0; i < eth_dev->num_queues; i++) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value | (1 << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
*rss_target_id = eth_dev->rss_target_id;
nic_insert_eth_port_dev(ndev, eth_dev);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
index 93b37d595e..70fe97a298 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
@@ -104,6 +104,114 @@ int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_rcp_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_rcp_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.rcp[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_rcp_s));
+ break;
+
+ case HW_QSL_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value,
+ be->qsl.nb_rcp_categories);
+ break;
+
+ case HW_QSL_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value);
+ break;
+
+ case HW_QSL_RCP_DISCARD:
+ GET_SET(be->qsl.v7.rcp[index].discard, value);
+ break;
+
+ case HW_QSL_RCP_DROP:
+ GET_SET(be->qsl.v7.rcp[index].drop, value);
+ break;
+
+ case HW_QSL_RCP_TBL_LO:
+ GET_SET(be->qsl.v7.rcp[index].tbl_lo, value);
+ break;
+
+ case HW_QSL_RCP_TBL_HI:
+ GET_SET(be->qsl.v7.rcp[index].tbl_hi, value);
+ break;
+
+ case HW_QSL_RCP_TBL_IDX:
+ GET_SET(be->qsl.v7.rcp[index].tbl_idx, value);
+ break;
+
+ case HW_QSL_RCP_TBL_MSK:
+ GET_SET(be->qsl.v7.rcp[index].tbl_msk, value);
+ break;
+
+ case HW_QSL_RCP_LR:
+ GET_SET(be->qsl.v7.rcp[index].lr, value);
+ break;
+
+ case HW_QSL_RCP_TSA:
+ GET_SET(be->qsl.v7.rcp[index].tsa, value);
+ break;
+
+ case HW_QSL_RCP_VLI:
+ GET_SET(be->qsl.v7.rcp[index].vli, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -117,6 +225,73 @@ int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qst_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qst_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_qst_entries) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.qst[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_qst_s));
+ break;
+
+ case HW_QSL_QST_QUEUE:
+ GET_SET(be->qsl.v7.qst[index].queue, value);
+ break;
+
+ case HW_QSL_QST_EN:
+ GET_SET(be->qsl.v7.qst[index].en, value);
+ break;
+
+ case HW_QSL_QST_TX_PORT:
+ GET_SET(be->qsl.v7.qst[index].tx_port, value);
+ break;
+
+ case HW_QSL_QST_LRE:
+ GET_SET(be->qsl.v7.qst[index].lre, value);
+ break;
+
+ case HW_QSL_QST_TCI:
+ GET_SET(be->qsl.v7.qst[index].tci, value);
+ break;
+
+ case HW_QSL_QST_VEN:
+ GET_SET(be->qsl.v7.qst[index].ven, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -130,6 +305,49 @@ int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qen_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qen_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= QSL_QEN_ENTRIES) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QEN_EN:
+ GET_SET(be->qsl.v7.qen[index].en, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, &value, 0);
+}
+
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, value, 1);
+}
+
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 35edd2d1a3..464c2fa81c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -19,12 +19,18 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_qsl {
+ struct hw_db_inline_qsl_data data;
+ int qst_idx;
+ } *qsl;
+
struct hw_db_inline_resource_db_slc_lr {
struct hw_db_inline_slc_lr_data data;
int ref;
} *slc_lr;
uint32_t nb_cot;
+ uint32_t nb_qsl;
uint32_t nb_slc_lr;
/* Items */
@@ -60,6 +66,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_qsl = ndev->be.qsl.nb_rcp_categories;
+ db->qsl = calloc(db->nb_qsl, sizeof(struct hw_db_inline_resource_db_qsl));
+
+ if (db->qsl == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_slc_lr = ndev->be.max_categories;
db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
@@ -85,6 +99,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->qsl);
free(db->slc_lr);
free(db->cat);
@@ -109,6 +124,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_QSL:
+ hw_db_inline_qsl_deref(ndev, db_handle, *(struct hw_db_qsl_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_SLC_LR:
hw_db_inline_slc_lr_deref(ndev, db_handle,
*(struct hw_db_slc_lr_idx *)&idxs[i]);
@@ -144,6 +163,13 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
+ /* QSL for traffic policing */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_hw_id, 0x3) < 0)
+ return -1;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, qsl_hw_id, 1) < 0)
+ return -1;
+
/* Select and enable QSL recipe */
if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
return -1;
@@ -254,6 +280,175 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
}
}
+/******************************************************************************/
+/* QSL */
+/******************************************************************************/
+
+/* Calculate queue mask for QSL TBL_MSK for given number of queues.
+ * NOTE: If number of queues is not power of two, then queue mask will be created
+ * for nearest smaller power of two.
+ */
+static uint32_t queue_mask(uint32_t nr_queues)
+{
+ nr_queues |= nr_queues >> 1;
+ nr_queues |= nr_queues >> 2;
+ nr_queues |= nr_queues >> 4;
+ nr_queues |= nr_queues >> 8;
+ nr_queues |= nr_queues >> 16;
+ return nr_queues >> 1;
+}
+
+static int hw_db_inline_qsl_compare(const struct hw_db_inline_qsl_data *data1,
+ const struct hw_db_inline_qsl_data *data2)
+{
+ if (data1->discard != data2->discard || data1->drop != data2->drop ||
+ data1->table_size != data2->table_size || data1->retransmit != data2->retransmit) {
+ return 0;
+ }
+
+ for (int i = 0; i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ if (data1->table[i].queue != data2->table[i].queue ||
+ data1->table[i].queue_en != data2->table[i].queue_en ||
+ data1->table[i].tx_port != data2->table[i].tx_port ||
+ data1->table[i].tx_port_en != data2->table[i].tx_port_en) {
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_qsl_idx qsl_idx = { .raw = 0 };
+ uint32_t qst_idx = 0;
+ int res;
+
+ qsl_idx.type = HW_DB_IDX_TYPE_QSL;
+
+ if (data->discard) {
+ qsl_idx.ids = 0;
+ return qsl_idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_qsl; ++i) {
+ if (hw_db_inline_qsl_compare(data, &db->qsl[i].data)) {
+ qsl_idx.ids = i;
+ hw_db_inline_qsl_ref(ndev, db, qsl_idx);
+ return qsl_idx;
+ }
+ }
+
+ res = flow_nic_alloc_resource(ndev, RES_QSL_RCP, 1);
+
+ if (res < 0) {
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qsl_idx.ids = res & 0xff;
+
+ if (data->table_size > 0) {
+ res = flow_nic_alloc_resource_config(ndev, RES_QSL_QST, data->table_size, 1);
+
+ if (res < 0) {
+ flow_nic_deref_resource(ndev, RES_QSL_RCP, qsl_idx.ids);
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qst_idx = (uint32_t)res;
+ }
+
+ memcpy(&db->qsl[qsl_idx.ids].data, data, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[qsl_idx.ids].qst_idx = qst_idx;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, qsl_idx.ids, 0x0);
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, qsl_idx.ids, data->discard);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_idx.ids, data->drop * 0x3);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_LR, qsl_idx.ids, data->retransmit * 0x3);
+
+ if (data->table_size == 0) {
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, 0x0);
+
+ } else {
+ const uint32_t table_start = qst_idx;
+ const uint32_t table_end = table_start + data->table_size - 1;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, table_end);
+
+ /* Toeplitz hash function uses TBL_IDX and TBL_MSK. */
+ uint32_t msk = queue_mask(table_end - table_start + 1);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, msk);
+
+ for (uint32_t i = 0; i < data->table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, table_start + i, 0x0);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_TX_PORT, table_start + i,
+ data->table[i].tx_port);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_LRE, table_start + i,
+ data->table[i].tx_port_en);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_QUEUE, table_start + i,
+ data->table[i].queue);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_EN, table_start + i,
+ data->table[i].queue_en);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, data->table_size);
+ }
+
+ hw_mod_qsl_rcp_flush(&ndev->be, qsl_idx.ids, 1);
+
+ return qsl_idx;
+}
+
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ (void)db_handle;
+
+ if (!idx.error && idx.ids != 0)
+ flow_nic_ref_resource(ndev, RES_QSL_RCP, idx.ids);
+}
+
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error || idx.ids == 0)
+ return;
+
+ if (flow_nic_deref_resource(ndev, RES_QSL_RCP, idx.ids) == 0) {
+ const int table_size = (int)db->qsl[idx.ids].data.table_size;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_qsl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ if (table_size > 0) {
+ const int table_start = db->qsl[idx.ids].qst_idx;
+
+ for (int i = 0; i < (int)table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL,
+ table_start + i, 0x0);
+ flow_nic_free_resource(ndev, RES_QSL_QST, table_start + i);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, table_size);
+ }
+
+ memset(&db->qsl[idx.ids].data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[idx.ids].qst_idx = 0;
+ }
+}
+
/******************************************************************************/
/* SLC_LR */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index ef63336b1c..d0435acaef 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,6 +36,10 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_qsl_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cat_idx {
HW_DB_IDX;
};
@@ -48,6 +52,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
};
@@ -113,6 +118,7 @@ struct hw_db_inline_action_set_data {
int jump;
struct {
struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
};
};
};
@@ -131,6 +137,11 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data);
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+
struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_slc_lr_data *data);
void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index fe5f82b3bd..9b504217d2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2278,6 +2278,52 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
+
+static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_data *qsl_data,
+ uint32_t num_dest_port, uint32_t num_queues)
+{
+ memset(qsl_data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+
+ if (fd->dst_num_avail <= 0) {
+ qsl_data->drop = 1;
+
+ } else {
+ assert(fd->dst_num_avail < HW_DB_INLINE_MAX_QST_PER_QSL);
+
+ uint32_t ports[fd->dst_num_avail];
+ uint32_t queues[fd->dst_num_avail];
+
+ uint32_t port_index = 0;
+ uint32_t queue_index = 0;
+ uint32_t max = num_dest_port > num_queues ? num_dest_port : num_queues;
+
+ memset(ports, 0, fd->dst_num_avail);
+ memset(queues, 0, fd->dst_num_avail);
+
+ qsl_data->table_size = max;
+ qsl_data->retransmit = num_dest_port > 0 ? 1 : 0;
+
+ for (int i = 0; i < fd->dst_num_avail; ++i)
+ if (fd->dst_id[i].type == PORT_PHY)
+ ports[port_index++] = fd->dst_id[i].id;
+
+ else if (fd->dst_id[i].type == PORT_VIRT)
+ queues[queue_index++] = fd->dst_id[i].id;
+
+ for (uint32_t i = 0; i < max; ++i) {
+ if (num_dest_port > 0) {
+ qsl_data->table[i].tx_port = ports[i % num_dest_port];
+ qsl_data->table[i].tx_port_en = 1;
+ }
+
+ if (num_queues > 0) {
+ qsl_data->table[i].queue = queues[i % num_queues];
+ qsl_data->table[i].queue_en = 1;
+ }
+ }
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
@@ -2302,6 +2348,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
(void)hsh_data;
(void)error;
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
+ local_idxs[(*local_idx_counter)++] = qsl_idx.raw;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2348,6 +2405,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
fh->caller_id = caller_id;
struct hw_db_inline_qsl_data qsl_data;
+ setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
@@ -2418,6 +2476,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle,
+ &qsl_data);
+ fh->db_idxs[fh->db_idx_counter++] = qsl_idx.raw;
+ action_set_data.qsl = qsl_idx;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2489,6 +2560,24 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* Initialize QSL with unmatched recipe index 0 - discard */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, 0, 0x1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, 0);
+
+ /* Initialize QST with default index 0 */
+ if (hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, 0, 0x0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_qst_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
+
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
@@ -2507,6 +2596,7 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
NT_FLM_VIOLATING_MBR_FLOW_TYPE,
@@ -2553,6 +2643,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, 0, 0);
+ hw_mod_qsl_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_QSL_RCP, 0);
+
hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
--
2.45.0
* [PATCH v1 30/73] net/ntnic: add KM module
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Key Matcher module matches on the values of individual packet fields.
It supports both exact matches, which are implemented with a CAM,
and wildcard matches, which are implemented with a TCAM.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 110 +-
drivers/net/ntnic/include/hw_mod_backend.h | 64 +-
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1065 +++++++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++
.../profile_inline/flow_api_hw_db_inline.h | 38 +
.../profile_inline/flow_api_profile_inline.c | 168 ++-
7 files changed, 2024 insertions(+), 35 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b1d39b919b..a0f02f4e8a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -52,34 +52,32 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_WORD_NUM 24
+#define MAX_BANKS 6
+
+#define MAX_TCAM_START_OFFSETS 4
+
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
/*
- * Tunnel encapsulation header definition
+ *   128     128     32     32    32
+ * | QW0 || QW4 || SW8 || SW9 | SWX   in the FPGA
+ *
+ * Each word may start at any offset; however, the enabled words are
+ * concatenated in the order shown above to form the extracted match
+ * data, so the match key must be built in the same order
*/
-#define MAX_TUN_HDR_SIZE 128
-struct tunnel_header_s {
- union {
- uint8_t hdr8[MAX_TUN_HDR_SIZE];
- uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
- } d;
- uint32_t user_port_id;
- uint8_t len;
-
- uint8_t nb_vlans;
-
- uint8_t ip_version; /* 4: v4, 6: v6 */
- uint16_t ip_csum_precalc;
-
- uint8_t new_outer;
- uint8_t l2_len;
- uint8_t l3_len;
- uint8_t l4_len;
+enum extractor_e {
+ KM_USE_EXTRACTOR_UNDEF,
+ KM_USE_EXTRACTOR_QWORD,
+ KM_USE_EXTRACTOR_SWORD,
};
struct match_elem_s {
+ enum extractor_e extr;
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
uint32_t e_mask[4];
@@ -89,16 +87,76 @@ struct match_elem_s {
uint32_t word_len;
};
+enum cam_tech_use_e {
+ KM_CAM,
+ KM_TCAM,
+ KM_SYNERGY
+};
+
struct km_flow_def_s {
struct flow_api_backend_s *be;
+ /* For keeping track of identical entries */
+ struct km_flow_def_s *reference;
+ struct km_flow_def_s *root;
+
/* For collect flow elements and sorting */
struct match_elem_s match[MAX_MATCH_FIELDS];
+ struct match_elem_s *match_map[MAX_MATCH_FIELDS];
int num_ftype_elem;
+ /* Finally formatted CAM/TCAM entry */
+ enum cam_tech_use_e target;
+ uint32_t entry_word[MAX_WORD_NUM];
+ uint32_t entry_mask[MAX_WORD_NUM];
+ int key_word_size;
+
+ /* TCAM calculated possible bank start offsets */
+ int start_offsets[MAX_TCAM_START_OFFSETS];
+ int num_start_offsets;
+
/* Flow information */
/* HW input port ID needed for compare. In port must be identical on flow types */
uint32_t port_id;
+ uint32_t info; /* used for color (actions) */
+ int info_set;
+ int flow_type; /* 0 is illegal and used as unset */
+ int flushed_to_target; /* if this km entry has been finally programmed into NIC hw */
+
+ /* CAM specific bank management */
+ int cam_paired;
+ int record_indexes[MAX_BANKS];
+ int bank_used;
+ uint32_t *cuckoo_moves; /* for CAM statistics only */
+ struct cam_distrib_s *cam_dist;
+
+ /* TCAM specific bank management */
+ struct tcam_distrib_s *tcam_dist;
+ int tcam_start_bank;
+ int tcam_record;
+};
+
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
};
enum flow_port_type_e {
@@ -247,11 +305,25 @@ struct flow_handle {
};
};
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
void km_free_ndev_resource_management(void **handle);
int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
uint32_t word_len, enum frame_offs_e start, int8_t offset);
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id);
+/*
+ * Compares 2 KM key definitions after the initial collect, validate and
+ * optimization steps. km is compared against an existing km1.
+ * If they are identical, km1's flow_type is returned.
+ */
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1);
+
+int km_rcp_set(struct km_flow_def_s *km, int index);
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color);
+int km_clear_data_match_entry(struct km_flow_def_s *km);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6fa2a3d94f..26903f2183 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -132,6 +132,22 @@ static inline int is_non_zero(const void *addr, size_t n)
return 0;
}
+/* Sideband info bit indicator */
+#define SWX_INFO (1 << 6)
+
+enum km_flm_if_select_e {
+ KM_FLM_IF_FIRST = 0,
+ KM_FLM_IF_SECOND = 1
+};
+
+#define FIELD_START_INDEX 100
+
+#define COMMON_FUNC_INFO_S \
+ int ver; \
+ void *base; \
+ unsigned int alloced_size; \
+ int debug
+
enum frame_offs_e {
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
@@ -141,22 +157,39 @@ enum frame_offs_e {
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ SB_VNI = SWX_INFO | 1,
+ SB_MAC_PORT = SWX_INFO | 2,
+ SB_KCC_ID = SWX_INFO | 3
};
-/* Sideband info bit indicator */
+enum {
+ QW0_SEL_EXCLUDE = 0,
+ QW0_SEL_FIRST32 = 1,
+ QW0_SEL_FIRST64 = 3,
+ QW0_SEL_ALL128 = 4,
+};
-enum km_flm_if_select_e {
- KM_FLM_IF_FIRST = 0,
- KM_FLM_IF_SECOND = 1
+enum {
+ QW4_SEL_EXCLUDE = 0,
+ QW4_SEL_FIRST32 = 1,
+ QW4_SEL_FIRST64 = 2,
+ QW4_SEL_ALL128 = 3,
};
-#define FIELD_START_INDEX 100
+enum {
+ DW8_SEL_EXCLUDE = 0,
+ DW8_SEL_FIRST32 = 3,
+};
-#define COMMON_FUNC_INFO_S \
- int ver; \
- void *base; \
- unsigned int alloced_size; \
- int debug
+enum {
+ DW10_SEL_EXCLUDE = 0,
+ DW10_SEL_FIRST32 = 2,
+};
+
+enum {
+ SWX_SEL_EXCLUDE = 0,
+ SWX_SEL_ALL32 = 1,
+};
enum {
PROT_OTHER = 0,
@@ -440,13 +473,24 @@ int hw_mod_km_alloc(struct flow_api_backend_s *be);
void hw_mod_km_free(struct flow_api_backend_s *be);
int hw_mod_km_reset(struct flow_api_backend_s *be);
int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value);
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value);
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count);
int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
int byte_val, uint32_t *value_set);
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set);
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 237e9f7b4e..30d6ea728e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -10,8 +10,34 @@
#include "flow_api_engine.h"
#include "nt_util.h"
+#define MAX_QWORDS 2
+#define MAX_SWORDS 2
+
+#define CUCKOO_MOVE_MAX_DEPTH 8
+
#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+#define CAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_cam_records + (rec))
+#define CAM_KM_DIST_IDX(bnk) \
+ ({ \
+ int _temp_bnk = (bnk); \
+ CAM_DIST_IDX(_temp_bnk, km->record_indexes[_temp_bnk]); \
+ })
+
+#define TCAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_tcam_bank_width + (rec))
+
+#define CAM_ENTRIES \
+ (km->be->km.nb_cam_banks * km->be->km.nb_cam_records * sizeof(struct cam_distrib_s))
+#define TCAM_ENTRIES \
+ (km->be->km.nb_tcam_bank_width * km->be->km.nb_tcam_banks * sizeof(struct tcam_distrib_s))
+
+/*
+ * CAM structures and defines
+ */
+struct cam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
static const struct cam_match_masks_s {
uint32_t word_len;
uint32_t key_mask[4];
@@ -36,6 +62,25 @@ static const struct cam_match_masks_s {
{ 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
};
+static int cam_addr_reserved_stack[CUCKOO_MOVE_MAX_DEPTH];
+
+/*
+ * TCAM structures and defines
+ */
+struct tcam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
+static int tcam_find_mapping(struct km_flow_def_s *km);
+
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
+{
+ km->cam_dist = (struct cam_distrib_s *)*handle;
+ km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
+ km->tcam_dist =
+ (struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+}
+
void km_free_ndev_resource_management(void **handle)
{
if (*handle) {
@@ -98,3 +143,1023 @@ int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_m
km->num_ftype_elem++;
return 0;
}
+
+static int get_word(struct km_flow_def_s *km, uint32_t size, int marked[])
+{
+ for (int i = 0; i < km->num_ftype_elem; i++)
+ if (!marked[i] && !(km->match[i].extr_start_offs_id & SWX_INFO) &&
+ km->match[i].word_len == size)
+ return i;
+
+ return -1;
+}
+
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id)
+{
+ /*
+	 * Create the combined extractor mappings.
+	 * Key fields could potentially be rearranged to cover otherwise
+	 * un-mappable combinations; split into CAM and TCAM and use
+	 * synergy mode when available.
+ */
+ int match_marked[MAX_MATCH_FIELDS];
+ int idx = 0;
+ int next = 0;
+ int m_idx;
+ int size;
+
+ memset(match_marked, 0, sizeof(match_marked));
+
+ /* build QWords */
+ for (int qwords = 0; qwords < MAX_QWORDS; qwords++) {
+ size = 4;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 2;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 1;
+ m_idx = get_word(km, 1, match_marked);
+ }
+ }
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_QWORD;
+
+ /* build final entry words and mask array */
+ for (int i = 0; i < size; i++) {
+ km->entry_word[idx + i] = km->match[m_idx].e_word[i];
+ km->entry_mask[idx + i] = km->match[m_idx].e_mask[i];
+ }
+
+ idx += size;
+ next++;
+ }
+
+ m_idx = get_word(km, 4, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more QWords */
+ return -1;
+ }
+
+ /*
+	 * On KM v6+ these are DWORDs instead. However, we only use them as
+	 * SWORDs for now. No match could exploit them as DWORDs because of the
+	 * maximum CAM length of 12 words: the last 2 words are taken by
+	 * KCC-ID/SWX and Color. With one or no QWORDs both DWORDs would fit
+	 * within 10 words, but no such use case is built in yet.
+ */
+ /* build SWords */
+ for (int swords = 0; swords < MAX_SWORDS; swords++) {
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_SWORD;
+
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[m_idx].e_word[0];
+ km->entry_mask[idx] = km->match[m_idx].e_mask[0];
+ idx++;
+ next++;
+ }
+
+ /*
+ * Make sure we took them all
+ */
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more SWords */
+ return -1;
+ }
+
+ /*
+ * Handle SWX words specially
+ */
+ int swx_found = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id & SWX_INFO) {
+ km->match_map[next] = &km->match[i];
+ km->match[i].extr = KM_USE_EXTRACTOR_SWORD;
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[i].e_word[0];
+ km->entry_mask[idx] = km->match[i].e_mask[0];
+ idx++;
+ next++;
+ swx_found = 1;
+ }
+ }
+
+ assert(next == km->num_ftype_elem);
+
+ km->key_word_size = idx;
+ km->port_id = port_id;
+
+ km->target = KM_CAM;
+
+ /*
+	 * Finally, decide whether this match->action goes into the TCAM.
+	 * When an SWX word is used, the entry must always go into the CAM,
+	 * regardless of the mask pattern. Later, when synergy mode is
+	 * applied, a split becomes possible.
+ */
+ if (!swx_found && km->key_word_size <= 6) {
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match_map[i]->masked_for_tcam) {
+ /* At least one */
+ km->target = KM_TCAM;
+ }
+ }
+ }
+
+ NT_LOG(DBG, FILTER, "This flow goes into %s", (km->target == KM_TCAM) ? "TCAM" : "CAM");
+
+ if (km->target == KM_TCAM) {
+ if (km->key_word_size > 10) {
+ /* do not support SWX in TCAM */
+ return -1;
+ }
+
+ /*
+ * adjust for unsupported key word size in TCAM
+ */
+ if ((km->key_word_size == 5 || km->key_word_size == 7 || km->key_word_size == 9)) {
+ km->entry_mask[km->key_word_size] = 0;
+ km->key_word_size++;
+ }
+
+ /*
+		 * Calculate the possible bank start indexes.
+		 * Note that the length of a key cannot change among the banks
+		 * it occupies. Unfortunately, restrictions in the TCAM lookup
+		 * make key lengths larger than 6 words hard to handle, even
+		 * though other sizes should be possible too.
+ */
+ switch (km->key_word_size) {
+ case 1:
+ for (int i = 0; i < 4; i++)
+				km->start_offsets[i] = 8 + i;
+
+ km->num_start_offsets = 4;
+ break;
+
+ case 2:
+ km->start_offsets[0] = 6;
+ km->num_start_offsets = 1;
+ break;
+
+ case 3:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 4:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 6:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Final Key word size too large: %i",
+ km->key_word_size);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1)
+{
+ if (km->target != km1->target || km->num_ftype_elem != km1->num_ftype_elem ||
+ km->key_word_size != km1->key_word_size || km->info_set != km1->info_set)
+ return 0;
+
+ /*
+	 * Before KCC-CAM: if the port is included in the match, different
+	 * ports in CAT can reuse this flow type.
+ */
+ int port_match_included = 0, kcc_swx_used = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id == SB_MAC_PORT) {
+ port_match_included = 1;
+ break;
+ }
+
+ if (km->match_map[i]->extr_start_offs_id == SB_KCC_ID) {
+ kcc_swx_used = 1;
+ break;
+ }
+ }
+
+ /*
+ * If not using KCC and if port match is not included in CAM,
+ * we need to have same port_id to reuse
+ */
+ if (!kcc_swx_used && !port_match_included && km->port_id != km1->port_id)
+ return 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ /* using same extractor types in same sequence */
+ if (km->match_map[i]->extr_start_offs_id !=
+ km1->match_map[i]->extr_start_offs_id ||
+ km->match_map[i]->rel_offs != km1->match_map[i]->rel_offs ||
+ km->match_map[i]->extr != km1->match_map[i]->extr ||
+ km->match_map[i]->word_len != km1->match_map[i]->word_len) {
+ return 0;
+ }
+ }
+
+ if (km->target == KM_CAM) {
+ /* in CAM must exactly match on all masks */
+ for (int i = 0; i < km->key_word_size; i++)
+ if (km->entry_mask[i] != km1->entry_mask[i])
+ return 0;
+
+ /* Would be set later if not reusing from km1 */
+ km->cam_paired = km1->cam_paired;
+
+ } else if (km->target == KM_TCAM) {
+ /*
+ * If TCAM, we must make sure Recipe Key Mask does not
+ * mask out enable bits in masks
+ * Note: it is important that km1 is the original creator
+ * of the KM Recipe, since it contains its true masks
+ */
+ for (int i = 0; i < km->key_word_size; i++)
+ if ((km->entry_mask[i] & km1->entry_mask[i]) != km->entry_mask[i])
+ return 0;
+
+ km->tcam_start_bank = km1->tcam_start_bank;
+ km->tcam_record = -1; /* needs to be found later */
+
+ } else {
+ NT_LOG(DBG, FILTER, "ERROR - KM target not defined or supported");
+ return 0;
+ }
+
+ /*
+	 * Check for a flow clash. If one is already programmed, return -1.
+ */
+ int double_match = 1;
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ if ((km->entry_word[i] & km->entry_mask[i]) !=
+ (km1->entry_word[i] & km1->entry_mask[i])) {
+ double_match = 0;
+ break;
+ }
+ }
+
+ if (double_match)
+ return -1;
+
+ /*
+ * Note that TCAM and CAM may reuse same RCP and flow type
+ * when this happens, CAM entry wins on overlap
+ */
+
+ /* Use same KM Recipe and same flow type - return flow type */
+ return km1->flow_type;
+}
+
+int km_rcp_set(struct km_flow_def_s *km, int index)
+{
+ int qw = 0;
+ int sw = 0;
+ int swx = 0;
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PRESET_ALL, index, 0, 0);
+
+ /* set extractor words, offs, contrib */
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ switch (km->match_map[i]->extr) {
+ case KM_USE_EXTRACTOR_SWORD:
+ if (km->match_map[i]->extr_start_offs_id & SWX_INFO) {
+ if (km->target == KM_CAM && swx == 0) {
+ /* SWX */
+ if (km->match_map[i]->extr_start_offs_id == SB_VNI) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - VNI");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_MAC_PORT) {
+ NT_LOG(DBG, FILTER,
+ "Set KM SWX sel A - PTC + MAC");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_KCC_ID) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - KCC ID");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ return -1;
+ }
+
+ swx++;
+
+ } else {
+ if (sw == 0) {
+ /* DW8 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_SEL_A, index, 0,
+ DW8_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW8 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else if (sw == 1) {
+ /* DW10 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_SEL_A, index, 0,
+ DW10_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW10 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else {
+ return -1;
+ }
+
+ sw++;
+ }
+
+ break;
+
+ case KM_USE_EXTRACTOR_QWORD:
+ if (qw == 0) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW0 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else if (qw == 1) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW4 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else {
+ return -1;
+ }
+
+ qw++;
+ break;
+
+ default:
+ return -1;
+ }
+ }
+
+ /* set mask A */
+ for (int i = 0; i < km->key_word_size; i++) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_MASK_A, index,
+ (km->be->km.nb_km_rcp_mask_a_word_size - 1) - i,
+ km->entry_mask[i]);
+ NT_LOG(DBG, FILTER, "Set KM mask A: %08x", km->entry_mask[i]);
+ }
+
+ if (km->target == KM_CAM) {
+ /* set info - Color */
+ if (km->info_set) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_INFO_A, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM info A");
+ }
+
+ /* set key length A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_EL_A, index, 0,
+ km->key_word_size + !!km->info_set - 1); /* select id is -1 */
+ /* set Flow Type for Key A */
+ NT_LOG(DBG, FILTER, "Set KM EL A: %i", km->key_word_size + !!km->info_set - 1);
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_FTM_A, index, 0, 1 << km->flow_type);
+
+ NT_LOG(DBG, FILTER, "Set KM FTM A - ft: %i", km->flow_type);
+
+ /* Set Paired - only on the CAM part though... TODO split CAM and TCAM */
+ if ((uint32_t)(km->key_word_size + !!km->info_set) >
+ km->be->km.nb_cam_record_words) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PAIRED, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM CAM Paired");
+ km->cam_paired = 1;
+ }
+
+ } else if (km->target == KM_TCAM) {
+ uint32_t bank_bm = 0;
+
+ if (tcam_find_mapping(km) < 0) {
+ /* failed mapping into TCAM */
+ NT_LOG(DBG, FILTER, "INFO: TCAM mapping flow failed");
+ return -1;
+ }
+
+ assert((uint32_t)(km->tcam_start_bank + km->key_word_size) <=
+ km->be->km.nb_tcam_banks);
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ bank_bm |=
+ (1 << (km->be->km.nb_tcam_banks - 1 - (km->tcam_start_bank + i)));
+ }
+
+ /* Set BANK_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_BANK_A, index, 0, bank_bm);
+ /* Set Kl_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_KL_A, index, 0, km->key_word_size - 1);
+
+ } else {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int cam_populate(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ km->entry_word[i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = km;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1,
+ km->entry_word[km->be->km.nb_cam_record_words + i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = km;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+
+ return res;
+}
+
+static int cam_reset_entry(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = NULL;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = NULL;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+ return res;
+}
+
+static int move_cuckoo_index(struct km_flow_def_s *km)
+{
+ assert(km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner);
+
+ for (uint32_t bank = 0; bank < km->be->km.nb_cam_banks; bank++) {
+ /* It will not select itself */
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner == NULL) {
+ if (km->cam_paired) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner != NULL)
+ continue;
+ }
+
+ /*
+ * Populate in new position
+ */
+ int res = cam_populate(km, bank);
+
+ if (res) {
+ NT_LOG(DBG, FILTER,
+ "Error: failed to write to KM CAM in cuckoo move");
+ return 0;
+ }
+
+ /*
+			 * Reset/free the entry in the old bank.
+			 * HW flushes are not really needed here: the old
+			 * addresses are always taken over by the caller. If you
+			 * change this code in future updates, this may no
+			 * longer be true!
+ */
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = NULL;
+
+ if (km->cam_paired)
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "KM Cuckoo hash moved from bank %i to bank %i (%04X => %04X)",
+ km->bank_used, bank, CAM_KM_DIST_IDX(km->bank_used),
+ CAM_KM_DIST_IDX(bank));
+ km->bank_used = bank;
+ (*km->cuckoo_moves)++;
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx, int levels,
+ int cam_adr_list_len)
+{
+ struct km_flow_def_s *km = km_parent->cam_dist[bank_idx].km_owner;
+
+ assert(levels <= CUCKOO_MOVE_MAX_DEPTH);
+
+ /*
+	 * Only move entries with the same pairing (paired vs. single).
+	 * This can be extended later to handle moves of both kinds.
+ */
+ if (!km || km_parent->cam_paired != km->cam_paired)
+ return 0;
+
+ if (move_cuckoo_index(km))
+ return 1;
+
+ if (levels <= 1)
+ return 0;
+
+ assert(cam_adr_list_len < CUCKOO_MOVE_MAX_DEPTH);
+
+ cam_addr_reserved_stack[cam_adr_list_len++] = bank_idx;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ int reserved = 0;
+ int new_idx = CAM_KM_DIST_IDX(i);
+
+ for (int i_reserved = 0; i_reserved < cam_adr_list_len; i_reserved++) {
+ if (cam_addr_reserved_stack[i_reserved] == new_idx) {
+ reserved = 1;
+ break;
+ }
+ }
+
+ if (reserved)
+ continue;
+
+ int res = move_cuckoo_index_level(km, new_idx, levels - 1, cam_adr_list_len);
+
+ if (res) {
+ if (move_cuckoo_index(km))
+ return 1;
+
+ assert(0);
+ }
+ }
+
+ return 0;
+}
+
+static int km_write_data_to_cam(struct km_flow_def_s *km)
+{
+ int res = 0;
+ assert(km->be->km.nb_cam_banks <= MAX_BANKS);
+ assert(km->cam_dist);
+
+ NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
+ km->record_indexes[1], km->record_indexes[2]);
+
+ if (km->info_set)
+ km->entry_word[km->key_word_size] = km->info; /* finally set info */
+
+ int bank = -1;
+
+ /*
+ * first step, see if any of the banks are free
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(i_bank)].km_owner == NULL) {
+ if (km->cam_paired == 0 ||
+ km->cam_dist[CAM_KM_DIST_IDX(i_bank) + 1].km_owner == NULL) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0) {
+ /*
+ * Second step - cuckoo move existing flows if possible
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (move_cuckoo_index_level(km, CAM_KM_DIST_IDX(i_bank), 4, 0)) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0)
+ return -1;
+
+ /* populate CAM */
+ NT_LOG(DBG, FILTER, "KM Bank = %i (addr %04X)", bank, CAM_KM_DIST_IDX(bank));
+ res = cam_populate(km, bank);
+
+ if (res == 0) {
+ km->flushed_to_target = 1;
+ km->bank_used = bank;
+ }
+
+ return res;
+}
+
+/*
+ * TCAM
+ */
+static int tcam_find_free_record(struct km_flow_def_s *km, int start_bank)
+{
+ for (uint32_t rec = 0; rec < km->be->km.nb_tcam_bank_width; rec++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank, rec)].km_owner == NULL) {
+ int pass = 1;
+
+ for (int ii = 1; ii < km->key_word_size; ii++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank + ii, rec)].km_owner !=
+ NULL) {
+ pass = 0;
+ break;
+ }
+ }
+
+ if (pass) {
+ km->tcam_record = rec;
+ return 1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int tcam_find_mapping(struct km_flow_def_s *km)
+{
+ /* Search record and start index for this flow */
+ for (int bs_idx = 0; bs_idx < km->num_start_offsets; bs_idx++) {
+ if (tcam_find_free_record(km, km->start_offsets[bs_idx])) {
+ km->tcam_start_bank = km->start_offsets[bs_idx];
+ NT_LOG(DBG, FILTER, "Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ return 0;
+ }
+ }
+
+ return -1;
+}
+
+static int tcam_write_word(struct km_flow_def_s *km, int bank, int record, uint32_t word,
+ uint32_t mask)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ uint8_t a = (uint8_t)((word >> (24 - (byte * 8))) & 0xff);
+ uint8_t a_m = (uint8_t)((mask >> (24 - (byte * 8))) & 0xff);
+ /* calculate important value bits */
+ a = a & a_m;
+
+ for (int val = 0; val < 256; val++) {
+ err |= hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if ((val & a_m) == a)
+ all_recs[rec_val] |= rec_bit;
+ else
+ all_recs[rec_val] &= ~rec_bit;
+
+ err |= hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ /* flush bank */
+ err |= hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+
+ if (err == 0) {
+ assert(km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner == NULL);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = km;
+ }
+
+ return err;
+}
+
+static int km_write_data_to_tcam(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_record < 0) {
+ tcam_find_free_record(km, km->tcam_start_bank);
+
+ if (km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER, "Reused RCP: Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ }
+
+ /* Write KM_TCI */
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record,
+ km->info);
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record,
+ km->flow_type);
+ err |= hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++) {
+ err = tcam_write_word(km, km->tcam_start_bank + i, km->tcam_record,
+ km->entry_word[i], km->entry_mask[i]);
+ }
+
+ if (err == 0)
+ km->flushed_to_target = 1;
+
+ return err;
+}
+
+static int tcam_reset_bank(struct km_flow_def_s *km, int bank, int record)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ for (int val = 0; val < 256; val++) {
+ err = hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+
+ all_recs[rec_val] &= ~rec_bit;
+ err = hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ if (err)
+ return err;
+
+ /* flush bank */
+ err = hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER, "Reset TCAM bank %i, rec_val %i rec bit %08x", bank, rec_val,
+ rec_bit);
+
+ return err;
+}
+
+static int tcam_reset_entry(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_start_bank < 0 || km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ /* Write KM_TCI */
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++)
+ err = tcam_reset_bank(km, km->tcam_start_bank + i, km->tcam_record);
+
+ return err;
+}
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color)
+{
+ int res = -1;
+
+ km->info = color;
+ NT_LOG(DBG, FILTER, "Write Data entry Color: %08x", color);
+
+ switch (km->target) {
+ case KM_CAM:
+ res = km_write_data_to_cam(km);
+ break;
+
+ case KM_TCAM:
+ res = km_write_data_to_tcam(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ break;
+ }
+
+ return res;
+}
+
+int km_clear_data_match_entry(struct km_flow_def_s *km)
+{
+ int res = 0;
+
+ if (km->root) {
+ struct km_flow_def_s *km1 = km->root;
+
+ while (km1->reference != km)
+ km1 = km1->reference;
+
+ km1->reference = km->reference;
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->reference) {
+ km->reference->root = NULL;
+
+ switch (km->target) {
+ case KM_CAM:
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = km->reference;
+
+ if (km->key_word_size + !!km->info_set > 1) {
+ assert(km->cam_paired);
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner =
+ km->reference;
+ }
+
+ break;
+
+ case KM_TCAM:
+ for (int i = 0; i < km->key_word_size; i++) {
+ km->tcam_dist[TCAM_DIST_IDX(km->tcam_start_bank + i,
+ km->tcam_record)]
+ .km_owner = km->reference;
+ }
+
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->flushed_to_target) {
+ switch (km->target) {
+ case KM_CAM:
+ res = cam_reset_entry(km, km->bank_used);
+ break;
+
+ case KM_TCAM:
+ res = tcam_reset_entry(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+ }
+
+ return res;
+}
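An aside for readers of tcam_write_word() above: the inner loop emulates one ternary byte of the TCAM by setting, for each of the 256 possible input byte values, a per-record bit that says whether that value matches. The rule it encodes is simply that the input byte must agree with the stored value on all mask bits ("important value bits"). The following standalone sketch is illustrative only, not part of the patch; `tcam_byte_matches` and `tcam_fill_byte_table` are hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

/* Per-byte TCAM membership rule from the tcam_write_word() loop:
 * a record matches an input byte iff the byte agrees with the
 * stored value on every mask bit. */
int tcam_byte_matches(uint8_t stored, uint8_t mask, uint8_t val)
{
	return (val & mask) == (stored & mask);
}

/* Build the 256-entry match table kept per byte position: entry v
 * is set iff input byte v matches. Returns the number of matching
 * values, which is 2^(number of don't-care bits). */
int tcam_fill_byte_table(uint8_t stored, uint8_t mask, uint8_t table[256])
{
	int hits = 0;

	for (int v = 0; v < 256; v++) {
		table[v] = (uint8_t)tcam_byte_matches(stored, mask, (uint8_t)v);
		hits += table[v];
	}

	return hits;
}
```

With stored 0xA0 and mask 0xF0 the low nibble is don't-care, so exactly 16 of the 256 entries are set.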
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
index 532884ca01..b8a30671c3 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
@@ -165,6 +165,240 @@ int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
return be->iface->km_rcp_flush(be->be_dev, &be->km, start_idx, count);
}
+static int hw_mod_km_rcp_mod(struct flow_api_backend_s *be, enum hw_km_e field, int index,
+ int word_off, uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->km.nb_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.rcp[index], (uint8_t)*value, sizeof(struct km_v7_rcp_s));
+ break;
+
+ case HW_KM_RCP_QW0_DYN:
+ GET_SET(be->km.v7.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_b, value);
+ break;
+
+ case HW_KM_RCP_QW4_DYN:
+ GET_SET(be->km.v7.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW8_DYN:
+ GET_SET(be->km.v7.rcp[index].dw8_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW8_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw8_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW10_DYN:
+ GET_SET(be->km.v7.rcp[index].dw10_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW10_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw10_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_b, value);
+ break;
+
+ case HW_KM_RCP_SWX_CCH:
+ GET_SET(be->km.v7.rcp[index].swx_cch, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_A:
+ GET_SET(be->km.v7.rcp[index].swx_sel_a, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_B:
+ GET_SET(be->km.v7.rcp[index].swx_sel_b, value);
+ break;
+
+ case HW_KM_RCP_MASK_A:
+ if (word_off >= KM_RCP_MASK_D_A_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_d_a[word_off], value);
+ break;
+
+ case HW_KM_RCP_MASK_B:
+ if (word_off >= KM_RCP_MASK_B_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_b[word_off], value);
+ break;
+
+ case HW_KM_RCP_DUAL:
+ GET_SET(be->km.v7.rcp[index].dual, value);
+ break;
+
+ case HW_KM_RCP_PAIRED:
+ GET_SET(be->km.v7.rcp[index].paired, value);
+ break;
+
+ case HW_KM_RCP_EL_A:
+ GET_SET(be->km.v7.rcp[index].el_a, value);
+ break;
+
+ case HW_KM_RCP_EL_B:
+ GET_SET(be->km.v7.rcp[index].el_b, value);
+ break;
+
+ case HW_KM_RCP_INFO_A:
+ GET_SET(be->km.v7.rcp[index].info_a, value);
+ break;
+
+ case HW_KM_RCP_INFO_B:
+ GET_SET(be->km.v7.rcp[index].info_b, value);
+ break;
+
+ case HW_KM_RCP_FTM_A:
+ GET_SET(be->km.v7.rcp[index].ftm_a, value);
+ break;
+
+ case HW_KM_RCP_FTM_B:
+ GET_SET(be->km.v7.rcp[index].ftm_b, value);
+ break;
+
+ case HW_KM_RCP_BANK_A:
+ GET_SET(be->km.v7.rcp[index].bank_a, value);
+ break;
+
+ case HW_KM_RCP_BANK_B:
+ GET_SET(be->km.v7.rcp[index].bank_b, value);
+ break;
+
+ case HW_KM_RCP_KL_A:
+ GET_SET(be->km.v7.rcp[index].kl_a, value);
+ break;
+
+ case HW_KM_RCP_KL_B:
+ GET_SET(be->km.v7.rcp[index].kl_b, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_A:
+ GET_SET(be->km.v7.rcp[index].keyway_a, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_B:
+ GET_SET(be->km.v7.rcp[index].keyway_b, value);
+ break;
+
+ case HW_KM_RCP_SYNERGY_MODE:
+ GET_SET(be->km.v7.rcp[index].synergy_mode, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw0_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw0_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw2_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw2_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw4_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw4_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw5_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw5_b_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, &value, 0);
+}
+
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, value, 1);
+}
+
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -180,6 +414,103 @@ int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_cam_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_cam_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ if ((unsigned int)bank >= be->km.nb_cam_banks) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ if ((unsigned int)record >= be->km.nb_cam_records) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ unsigned int index = bank * be->km.nb_cam_records + record;
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_CAM_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.cam[index], (uint8_t)*value, sizeof(struct km_v7_cam_s));
+ break;
+
+ case HW_KM_CAM_W0:
+ GET_SET(be->km.v7.cam[index].w0, value);
+ break;
+
+ case HW_KM_CAM_W1:
+ GET_SET(be->km.v7.cam[index].w1, value);
+ break;
+
+ case HW_KM_CAM_W2:
+ GET_SET(be->km.v7.cam[index].w2, value);
+ break;
+
+ case HW_KM_CAM_W3:
+ GET_SET(be->km.v7.cam[index].w3, value);
+ break;
+
+ case HW_KM_CAM_W4:
+ GET_SET(be->km.v7.cam[index].w4, value);
+ break;
+
+ case HW_KM_CAM_W5:
+ GET_SET(be->km.v7.cam[index].w5, value);
+ break;
+
+ case HW_KM_CAM_FT0:
+ GET_SET(be->km.v7.cam[index].ft0, value);
+ break;
+
+ case HW_KM_CAM_FT1:
+ GET_SET(be->km.v7.cam[index].ft1, value);
+ break;
+
+ case HW_KM_CAM_FT2:
+ GET_SET(be->km.v7.cam[index].ft2, value);
+ break;
+
+ case HW_KM_CAM_FT3:
+ GET_SET(be->km.v7.cam[index].ft3, value);
+ break;
+
+ case HW_KM_CAM_FT4:
+ GET_SET(be->km.v7.cam[index].ft4, value);
+ break;
+
+ case HW_KM_CAM_FT5:
+ GET_SET(be->km.v7.cam[index].ft5, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_cam_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count)
{
if (count == ALL_ENTRIES)
@@ -273,6 +604,12 @@ int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int ba
return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 0);
}
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set)
+{
+ return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 1);
+}
+
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -288,6 +625,49 @@ int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_tci_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_tci_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ unsigned int index = bank * be->km.nb_tcam_bank_width + record;
+
+ if (index >= (be->km.nb_tcam_banks * be->km.nb_tcam_bank_width)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_TCI_COLOR:
+ GET_SET(be->km.v7.tci[index].color, value);
+ break;
+
+ case HW_KM_TCI_FT:
+ GET_SET(be->km.v7.tci[index].ft, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_tci_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 464c2fa81c..feac15cd9f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -39,7 +39,19 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_km_rcp {
+ struct hw_db_inline_km_rcp_data data;
+ int ref;
+
+ struct hw_db_inline_resource_db_km_ft {
+ struct hw_db_inline_km_ft_data data;
+ int ref;
+ } *ft;
+ } *km;
+
uint32_t nb_cat;
+ uint32_t nb_km_ft;
+ uint32_t nb_km_rcp;
/* Hardware */
@@ -90,6 +102,25 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_km_ft = ndev->be.cat.nb_flow_types;
+ db->nb_km_rcp = ndev->be.km.nb_categories;
+ db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
+
+ if (db->km == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ db->km[i].ft = calloc(db->nb_km_ft * db->nb_cat,
+ sizeof(struct hw_db_inline_resource_db_km_ft));
+
+ if (db->km[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
*db_handle = db;
return 0;
}
@@ -103,6 +134,13 @@ void hw_db_inline_destroy(void *db_handle)
free(db->slc_lr);
free(db->cat);
+ if (db->km) {
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
+ free(db->km[i].ft);
+
+ free(db->km);
+ }
+
free(db->cfn);
free(db);
@@ -133,12 +171,61 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_KM_RCP:
+ hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
+ break;
+
default:
break;
}
}
}
+
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ for (uint32_t i = 0; i < size; ++i) {
+ if (idxs[i].type != type)
+ continue;
+
+ switch (type) {
+ case HW_DB_IDX_TYPE_NONE:
+ return NULL;
+
+ case HW_DB_IDX_TYPE_CAT:
+ return &db->cat[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_QSL:
+ return &db->qsl[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_COT:
+ return &db->cot[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_SLC_LR:
+ return &db->slc_lr[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_KM_RCP:
+ return &db->km[idxs[i].id1].data;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
+ default:
+ return NULL;
+ }
+ }
+
+ return NULL;
+}
+
/******************************************************************************/
/* Filter */
/******************************************************************************/
@@ -613,3 +700,150 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->cat[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* KM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_km_compare(const struct hw_db_inline_km_rcp_data *data1,
+ const struct hw_db_inline_km_rcp_data *data2)
+{
+ return data1->rcp == data2->rcp;
+}
+
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_km_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_RCP;
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ if (!found && db->km[i].ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (db->km[i].ref > 0 && hw_db_inline_km_compare(data, &db->km[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->km[idx.id1].data, data, sizeof(struct hw_db_inline_km_rcp_data));
+ db->km[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->km[idx.id1].ref += 1;
+}
+
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ (void)db_handle;
+
+ if (idx.error)
+ return;
+}
+
+/******************************************************************************/
+/* KM FT */
+/******************************************************************************/
+
+static int hw_db_inline_km_ft_compare(const struct hw_db_inline_km_ft_data *data1,
+ const struct hw_db_inline_km_ft_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[data->km.id1];
+ struct hw_db_km_ft idx = { .raw = 0 };
+ uint32_t cat_offset = data->cat.ids * db->nb_cat;
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_FT;
+ idx.id2 = data->km.id1;
+ idx.id3 = data->cat.ids;
+
+ if (km_rcp->data.rcp == 0) {
+ idx.id1 = 0;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_km_ft; ++i) {
+ const struct hw_db_inline_resource_db_km_ft *km_ft = &km_rcp->ft[cat_offset + i];
+
+ if (!found && km_ft->ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (km_ft->ref > 0 && hw_db_inline_km_ft_compare(data, &km_ft->data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&km_rcp->ft[cat_offset + idx.id1].data, data,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error) {
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+ db->km[idx.id2].ft[cat_offset + idx.id1].ref += 1;
+ }
+}
+
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[idx.id2];
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+
+ if (idx.error)
+ return;
+
+ km_rcp->ft[cat_offset + idx.id1].ref -= 1;
+
+ if (km_rcp->ft[cat_offset + idx.id1].ref <= 0) {
+ memset(&km_rcp->ft[cat_offset + idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 0;
+ }
+}
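The add/ref/deref helpers added above all follow one pattern: scan the table for an identical live entry and bump its reference count, otherwise claim the first free slot, and wipe the entry only when the last reference is dropped. A minimal self-contained sketch of that pattern follows; it mirrors the logic of hw_db_inline_km_ft_add()/_deref() but the names (`db_slot`, `db_add`, `db_deref`) are illustrative, not part of the driver:

```c
#include <assert.h>
#include <string.h>

/* Illustrative reference-counted resource table, mirroring the
 * dedup-or-allocate logic of the inline hardware DB. */
struct db_slot {
	int ref;	/* <= 0 means the slot is free */
	int data;	/* stand-in for the entry's key material */
};

/* Return the slot index holding `data`, reusing an identical live
 * entry when possible; -1 on resource exhaustion. */
int db_add(struct db_slot *tab, int n, int data)
{
	int free_idx = -1;

	for (int i = 0; i < n; i++) {
		if (tab[i].ref > 0 && tab[i].data == data) {
			tab[i].ref += 1;	/* identical entry: reference it */
			return i;
		}

		if (free_idx < 0 && tab[i].ref <= 0)
			free_idx = i;	/* remember first free slot */
	}

	if (free_idx < 0)
		return -1;	/* table exhausted */

	tab[free_idx].data = data;
	tab[free_idx].ref = 1;
	return free_idx;
}

/* Drop one reference; wipe the entry when the last reference goes. */
void db_deref(struct db_slot *tab, int idx)
{
	tab[idx].ref -= 1;

	if (tab[idx].ref <= 0)
		memset(&tab[idx], 0, sizeof(tab[idx]));
}
```

The dedup step is what lets many flows that share identical match data reference a single hardware recipe instead of exhausting the (small) recipe tables.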
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index d0435acaef..e104ba7327 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_action_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cot_idx {
HW_DB_IDX;
};
@@ -48,12 +52,22 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_km_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_km_ft {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_KM_FT,
};
/* Functionality data types */
@@ -123,6 +137,16 @@ struct hw_db_inline_action_set_data {
};
};
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -130,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle);
void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
uint32_t size);
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
@@ -158,6 +184,18 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
/**/
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data);
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data);
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+
+/**/
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9b504217d2..811659118d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2336,17 +2336,28 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub,
struct rte_flow_error *error)
{
- (void)dev;
- (void)fd;
(void)group;
- (void)local_idxs;
- (void)local_idx_counter;
(void)flm_rpl_ext_ptr;
(void)flm_ft;
(void)flm_scrub;
- (void)qsl_data;
(void)hsh_data;
- (void)error;
+
+ const bool empty_pattern = fd_has_empty_pattern(fd);
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = empty_pattern ? 0x0 : 0x4, /* FT key C */
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle, &cot_data);
+ local_idxs[(*local_idx_counter)++] = cot_idx.raw;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
/* Finalize QSL */
struct hw_db_qsl_idx qsl_idx =
@@ -2448,6 +2459,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ int identical_km_entry_ft = -1;
+
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -2522,6 +2535,130 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ /* Setup KM RCP */
+ struct hw_db_inline_km_rcp_data km_rcp_data = { .rcp = 0 };
+
+ if (fd->km.num_ftype_elem) {
+ struct flow_handle *flow = dev->ndev->flow_base, *found_flow = NULL;
+
+ if (km_key_create(&fd->km, fh->port_id)) {
+ NT_LOG(ERR, FILTER, "KM creation failed");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.be = &dev->ndev->be;
+
+ /* Look for existing KM RCPs */
+ while (flow) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLOW &&
+ flow->fd->km.flow_type) {
+ int res = km_key_compare(&fd->km, &flow->fd->km);
+
+ if (res < 0) {
+ /* Flow rcp and match data is identical */
+ identical_km_entry_ft = flow->fd->km.flow_type;
+ found_flow = flow;
+ break;
+ }
+
+ if (res > 0) {
+ /* Flow rcp found and match data is different */
+ found_flow = flow;
+ }
+ }
+
+ flow = flow->next;
+ }
+
+ km_attach_ndev_resource_management(&fd->km, &dev->ndev->km_res_handle);
+
+ if (found_flow != NULL) {
+ /* Reuse existing KM RCP */
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)
+ found_flow->flm_db_idxs,
+ found_flow->flm_db_idx_counter);
+
+ if (other_km_rcp_data == NULL ||
+ flow_nic_ref_resource(dev->ndev, RES_KM_CATEGORY,
+ other_km_rcp_data->rcp)) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference existing KM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_data.rcp = other_km_rcp_data->rcp;
+ } else {
+ /* Alloc new KM RCP */
+ int rcp = flow_nic_alloc_resource(dev->ndev, RES_KM_CATEGORY, 1);
+
+ if (rcp < 0) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference KM RCP resource (flow_nic_alloc)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_set(&fd->km, rcp);
+ km_rcp_data.rcp = (uint32_t)rcp;
+ }
+ }
+
+ struct hw_db_km_idx km_idx =
+ hw_db_inline_km_add(dev->ndev, dev->ndev->hw_db_handle, &km_rcp_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = km_idx.raw;
+
+ if (km_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM RCP resource (db_inline)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Setup KM FT */
+ struct hw_db_inline_km_ft_data km_ft_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ };
+ struct hw_db_km_ft km_ft_idx =
+ hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = km_ft_idx.raw;
+
+ if (km_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Finalize KM RCP */
+ if (fd->km.num_ftype_elem) {
+ if (identical_km_entry_ft >= 0 && identical_km_entry_ft != km_ft_idx.id1) {
+ NT_LOG(ERR, FILTER,
+ "Identical KM matches cannot have different KM FTs");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.flow_type = km_ft_idx.id1;
+
+ if (fd->km.target == KM_CAM) {
+ uint32_t ft_a_mask = 0;
+ hw_mod_km_rcp_get(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0, &ft_a_mask);
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0,
+ ft_a_mask | (1 << fd->km.flow_type));
+ }
+
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)km_rcp_data.rcp, 1);
+
+ km_write_data_match_entry(&fd->km, 0);
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2802,6 +2939,25 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
} else {
NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->fd->km.num_ftype_elem) {
+ km_clear_data_match_entry(&fh->fd->km);
+
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ if (other_km_rcp_data != NULL &&
+ flow_nic_deref_resource(dev->ndev, RES_KM_CATEGORY,
+ (int)other_km_rcp_data->rcp) == 0) {
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_PRESET_ALL,
+ (int)other_km_rcp_data->rcp, 0, 0);
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)other_km_rcp_data->rcp,
+ 1);
+ }
+ }
+
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
(struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
free(fh->fd);
--
2.45.0
* [PATCH v1 31/73] net/ntnic: add hash API
@ 2024-10-21 21:04 ` Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Hasher module calculates a configurable hash value
to be used internally by the FPGA.
The module supports both Toeplitz and NT-hash.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
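As background for the Toeplitz option named in the commit message: the classic RSS Toeplitz hash XORs, for every set bit of the input tuple, the 32-bit window of the secret key that starts at that bit position. The bit-serial definition can be sketched as follows; this is an illustrative standalone example, not driver code (`toeplitz_hash` is a hypothetical name, and the key is assumed to be at least 4 bytes long):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bit-serial Toeplitz hash as used by RSS: for every set bit of the
 * input, XOR in the 32-bit key window starting at that bit position.
 * Illustrative sketch only; assumes key_len >= 4 and enough key bits
 * for the given input length. */
uint32_t toeplitz_hash(const uint8_t *key, size_t key_len,
		       const uint8_t *data, size_t data_len)
{
	uint32_t hash = 0;
	/* current 32-bit key window, preloaded with the first 4 key bytes */
	uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
		((uint32_t)key[2] << 8) | key[3];
	size_t next_key_byte = 4;
	uint8_t pending = 0;	/* key bits not yet shifted into the window */
	int pending_bits = 0;

	for (size_t i = 0; i < data_len; i++) {
		for (int bit = 7; bit >= 0; bit--) {
			if (data[i] & (1u << bit))
				hash ^= window;

			/* slide the key window left by one bit */
			if (pending_bits == 0 && next_key_byte < key_len) {
				pending = key[next_key_byte++];
				pending_bits = 8;
			}

			window <<= 1;

			if (pending_bits > 0) {
				window |= (pending >> 7) & 1u;
				pending <<= 1;
				pending_bits--;
			}
		}
	}

	return hash;
}
```

A handy sanity check: with an all-ones key every window is 0xFFFFFFFF, so a single set input bit hashes to 0xFFFFFFFF and two set bits cancel to 0.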
drivers/net/ntnic/include/flow_api.h | 40 +
drivers/net/ntnic/include/flow_api_engine.h | 17 +
drivers/net/ntnic/include/hw_mod_backend.h | 20 +
.../ntnic/include/stream_binary_flow_api.h | 25 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 212 +++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 ++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 25 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 ++++
.../profile_inline/flow_api_hw_db_inline.c | 142 +++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 849 +++++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 4 +
drivers/net/ntnic/ntnic_mod_reg.h | 4 +
15 files changed, 1705 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index edffd0a57a..2e96fa5bed 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -29,6 +29,37 @@ struct hw_mod_resource_s {
*/
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev);
+/**
+ * A structure used to configure the Receive Side Scaling (RSS) feature
+ * of an Ethernet port.
+ */
+struct nt_eth_rss_conf {
+ /**
+ * In rte_eth_dev_rss_hash_conf_get(), the *rss_key_len* should be
+ * greater than or equal to the *hash_key_size* obtained from the
+ * rte_eth_dev_info_get() API, and the *rss_key* should contain at least
+ * *hash_key_size* bytes. If these requirements are not met, the query
+ * result is unreliable even if the operation returns success.
+ *
+ * In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(), if
+ * *rss_key* is not NULL, the *rss_key_len* indicates the length of the
+ * *rss_key* in bytes and it should be equal to *hash_key_size*.
+ * If *rss_key* is NULL, drivers are free to use a random or a default key.
+ */
+ uint8_t rss_key[MAX_RSS_KEY_LEN];
+ /**
+ * Indicates the type of packets or the specific part of packets to
+ * which RSS hashing is to be applied.
+ */
+ uint64_t rss_hf;
+ /**
+ * Hash algorithm.
+ */
+ enum rte_eth_hash_function algorithm;
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask);
+
struct flow_eth_dev {
/* NIC that owns this port device */
struct flow_nic_dev *ndev;
@@ -49,6 +80,11 @@ struct flow_eth_dev {
struct flow_eth_dev *next;
};
+enum flow_nic_hash_e {
+ HASH_ALGO_ROUND_ROBIN = 0,
+ HASH_ALGO_5TUPLE,
+};
+
/* registered NIC backends */
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
@@ -191,4 +227,8 @@ void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm);
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index a0f02f4e8a..e52363f04e 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,7 @@ struct km_flow_def_s {
int bank_used;
uint32_t *cuckoo_moves; /* for CAM statistics only */
struct cam_distrib_s *cam_dist;
+ struct hasher_s *hsh;
/* TCAM specific bank management */
struct tcam_distrib_s *tcam_dist;
@@ -136,6 +137,17 @@ struct km_flow_def_s {
int tcam_record;
};
+/*
+ * RSS configuration, see struct rte_flow_action_rss
+ */
+struct hsh_def_s {
+ enum rte_eth_hash_function func; /* RSS hash function to apply */
+ /* RSS hash types, see definition of RTE_ETH_RSS_* for hash calculation options */
+ uint64_t types;
+ uint32_t key_len; /* Hash key length in bytes. */
+ const uint8_t *key; /* Hash key. */
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -247,6 +259,11 @@ struct nic_flow_def {
* Key Matcher flow definitions
*/
struct km_flow_def_s km;
+
+ /*
+ * Hash module RSS definitions
+ */
+ struct hsh_def_s hsh;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 26903f2183..cee148807a 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -149,14 +149,27 @@ enum km_flm_if_select_e {
int debug
enum frame_offs_e {
+ DYN_SOF = 0,
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
+ DYN_MPLS = 3,
DYN_L3 = 4,
+ DYN_ID_IPV4_6 = 5,
+ DYN_FINAL_IP_DST = 6,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
+ DYN_TUN_PAYLOAD = 9,
+ DYN_TUN_L2 = 10,
+ DYN_TUN_VLAN = 11,
+ DYN_TUN_MPLS = 12,
DYN_TUN_L3 = 13,
+ DYN_TUN_ID_IPV4_6 = 14,
+ DYN_TUN_FINAL_IP_DST = 15,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ DYN_EOF = 18,
+ DYN_L3_PAYLOAD_END = 19,
+ DYN_TUN_L3_PAYLOAD_END = 20,
SB_VNI = SWX_INFO | 1,
SB_MAC_PORT = SWX_INFO | 2,
SB_KCC_ID = SWX_INFO | 3
@@ -227,6 +240,11 @@ enum {
};
+enum {
+ HASH_HASH_NONE = 0,
+ HASH_5TUPLE = 8,
+};
+
enum {
CPY_SELECT_DSCP_IPV4 = 0,
CPY_SELECT_DSCP_IPV6 = 1,
@@ -670,6 +688,8 @@ int hw_mod_hsh_alloc(struct flow_api_backend_s *be);
void hw_mod_hsh_free(struct flow_api_backend_s *be);
int hw_mod_hsh_reset(struct flow_api_backend_s *be);
int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value);
struct qsl_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 8097518d61..e5fe686d99 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,6 +12,31 @@
/* Max RSS hash key length in bytes */
#define MAX_RSS_KEY_LEN 40
+/* NT specific MASKs for RSS configuration */
+/* NOTE: Masks are required for correct RSS configuration, do not modify them! */
+#define NT_ETH_RSS_IPV4_MASK \
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+
+#define NT_ETH_RSS_IPV6_MASK \
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NT_ETH_RSS_IP_MASK \
+ (NT_ETH_RSS_IPV4_MASK | NT_ETH_RSS_IPV6_MASK | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
+
+/* List of all RSS flags supported for RSS calculation offload */
+#define NT_ETH_RSS_OFFLOAD_MASK \
+ (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_IPV4_CHKSUM | RTE_ETH_RSS_L4_CHKSUM | RTE_ETH_RSS_PORT | RTE_ETH_RSS_GTPU)
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e1fef37ccb..d7e6d05556 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -56,6 +56,7 @@ sources = files(
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
+ 'nthw/flow_api/flow_hasher.c',
'nthw/flow_api/flow_kcc.c',
'nthw/flow_api/flow_km.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 4303a2c759..c6b818a36b 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -2,6 +2,8 @@
* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2023 Napatech A/S
*/
+#include "ntlog.h"
+#include "nt_util.h"
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
@@ -12,6 +14,11 @@
#define SCATTER_GATHER
+#define RSS_TO_STRING(name) \
+ { \
+ name, #name \
+ }
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -817,6 +824,211 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
return ndev->be.be_dev;
}
+/* Information for a given RSS type. */
+struct rss_type_info {
+ uint64_t rss_type;
+ const char *str;
+};
+
+static struct rss_type_info rss_to_string[] = {
+ /* RTE_BIT64(2) IPv4 dst + IPv4 src */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4),
+ /* RTE_BIT64(3) IPv4 dst + IPv4 src + Identification of group of fragments */
+ RSS_TO_STRING(RTE_ETH_RSS_FRAG_IPV4),
+ /* RTE_BIT64(4) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_TCP),
+ /* RTE_BIT64(5) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_UDP),
+ /* RTE_BIT64(6) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_SCTP),
+ /* RTE_BIT64(7) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_OTHER),
+ /*
+ * RTE_BIT64(14) 128-bits of L2 payload starting after src MAC, i.e. including optional
+ * VLAN tag and ethertype. Overrides all L3 and L4 flags at the same level, but inner
+ * L2 payload can be combined with outer S-VLAN and GTPU TEID flags.
+ */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_PAYLOAD),
+ /* RTE_BIT64(18) L4 dst + L4 src + L4 protocol - see comment of RTE_ETH_RSS_L4_CHKSUM */
+ RSS_TO_STRING(RTE_ETH_RSS_PORT),
+ /* RTE_BIT64(19) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_VXLAN),
+ /* RTE_BIT64(20) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_GENEVE),
+ /* RTE_BIT64(21) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_NVGRE),
+ /* RTE_BIT64(23) GTP TEID - always from outer GTPU header */
+ RSS_TO_STRING(RTE_ETH_RSS_GTPU),
+ /* RTE_BIT64(24) MAC dst + MAC src */
+ RSS_TO_STRING(RTE_ETH_RSS_ETH),
+ /* RTE_BIT64(25) outermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_S_VLAN),
+ /* RTE_BIT64(26) innermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_C_VLAN),
+ /* RTE_BIT64(27) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ESP),
+ /* RTE_BIT64(28) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_AH),
+ /* RTE_BIT64(29) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV3),
+ /* RTE_BIT64(30) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PFCP),
+ /* RTE_BIT64(31) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PPPOE),
+ /* RTE_BIT64(32) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ECPRI),
+ /* RTE_BIT64(33) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_MPLS),
+ /* RTE_BIT64(34) IPv4 Header checksum + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4_CHKSUM),
+
+ /*
+ * if combined with RTE_ETH_RSS_NONFRAG_IPV4_[TCP|UDP|SCTP] then
+ * L4 protocol + chosen protocol header Checksum
+ * else
+ * error
+ */
+ /* RTE_BIT64(35) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_CHKSUM),
+#ifndef ANDROMEDA_DPDK_21_11
+ /* RTE_BIT64(36) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV2),
+#endif
+
+ { RTE_BIT64(37), "unknown_RTE_BIT64(37)" },
+ { RTE_BIT64(38), "unknown_RTE_BIT64(38)" },
+ { RTE_BIT64(39), "unknown_RTE_BIT64(39)" },
+ { RTE_BIT64(40), "unknown_RTE_BIT64(40)" },
+ { RTE_BIT64(41), "unknown_RTE_BIT64(41)" },
+ { RTE_BIT64(42), "unknown_RTE_BIT64(42)" },
+ { RTE_BIT64(43), "unknown_RTE_BIT64(43)" },
+ { RTE_BIT64(44), "unknown_RTE_BIT64(44)" },
+ { RTE_BIT64(45), "unknown_RTE_BIT64(45)" },
+ { RTE_BIT64(46), "unknown_RTE_BIT64(46)" },
+ { RTE_BIT64(47), "unknown_RTE_BIT64(47)" },
+ { RTE_BIT64(48), "unknown_RTE_BIT64(48)" },
+ { RTE_BIT64(49), "unknown_RTE_BIT64(49)" },
+
+ /* RTE_BIT64(50) outermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_OUTERMOST),
+ /* RTE_BIT64(51) innermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_INNERMOST),
+
+ /* RTE_BIT64(52) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE96),
+ /* RTE_BIT64(53) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE64),
+ /* RTE_BIT64(54) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE56),
+ /* RTE_BIT64(55) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE48),
+ /* RTE_BIT64(56) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE40),
+ /* RTE_BIT64(57) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE32),
+
+ /* RTE_BIT64(58) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_DST_ONLY),
+ /* RTE_BIT64(59) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_SRC_ONLY),
+ /* RTE_BIT64(60) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_DST_ONLY),
+ /* RTE_BIT64(61) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_SRC_ONLY),
+ /* RTE_BIT64(62) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_DST_ONLY),
+ /* RTE_BIT64(63) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_SRC_ONLY),
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask)
+{
+ if (str == NULL || str_len == 0)
+ return -1;
+
+ memset(str, 0x0, str_len);
+ uint16_t str_end = 0;
+ const struct rss_type_info *start = rss_to_string;
+
+ for (const struct rss_type_info *p = start; p != start + ARRAY_SIZE(rss_to_string); ++p) {
+ if (p->rss_type & hash_mask) {
+ if (strlen(prefix) + strlen(p->str) < (size_t)(str_len - str_end)) {
+ snprintf(str + str_end, str_len - str_end, "%s", prefix);
+ str_end += strlen(prefix);
+ snprintf(str + str_end, str_len - str_end, "%s", p->str);
+ str_end += strlen(p->str);
+
+ } else {
+ return -1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Hash
+ */
+
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm)
+{
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ switch (algorithm) {
+ case HASH_ALGO_5TUPLE:
+ /* need to create an IPv6 hashing and enable the adaptive ip mask bit */
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_OFS, hsh_idx, 0, -16);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_PE, hsh_idx, 0, DYN_L4);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_PE, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_P, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 1, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 2, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 3, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 4, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 5, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 6, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 7, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 8, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 9, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_VALID, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_TYPE, hsh_idx, 0, HASH_5TUPLE);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0, 1);
+
+ NT_LOG(DBG, FILTER, "Set IPv6 5-tuple hasher with adaptive IPv4 hashing");
+ break;
+
+ default:
+ case HASH_ALGO_ROUND_ROBIN:
+ /* zero is round-robin */
+ break;
+ }
+
+ return 0;
+}
+
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.c b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
new file mode 100644
index 0000000000..86dfc16e79
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
@@ -0,0 +1,156 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <math.h>
+
+#include "flow_hasher.h"
+
+static uint32_t shuffle(uint32_t x)
+{
+ return ((x & 0x00000002) << 29) | ((x & 0xAAAAAAA8) >> 3) | ((x & 0x15555555) << 3) |
+ ((x & 0x40000000) >> 29);
+}
+
+static uint32_t ror_inv(uint32_t x, const int s)
+{
+ return (x >> s) | ((~x) << (32 - s));
+}
+
+static uint32_t combine(uint32_t x, uint32_t y)
+{
+ uint32_t x1 = ror_inv(x, 15);
+ uint32_t x2 = ror_inv(x, 13);
+ uint32_t y1 = ror_inv(y, 3);
+ uint32_t y2 = ror_inv(y, 27);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint32_t mix(uint32_t x, uint32_t y)
+{
+ return shuffle(combine(x, y));
+}
+
+static uint64_t ror_inv3(uint64_t x)
+{
+ const uint64_t m = 0xE0000000E0000000ULL;
+
+ return ((x >> 3) | m) ^ ((x << 29) & m);
+}
+
+static uint64_t ror_inv13(uint64_t x)
+{
+ const uint64_t m = 0xFFF80000FFF80000ULL;
+
+ return ((x >> 13) | m) ^ ((x << 19) & m);
+}
+
+static uint64_t ror_inv15(uint64_t x)
+{
+ const uint64_t m = 0xFFFE0000FFFE0000ULL;
+
+ return ((x >> 15) | m) ^ ((x << 17) & m);
+}
+
+static uint64_t ror_inv27(uint64_t x)
+{
+ const uint64_t m = 0xFFFFFFE0FFFFFFE0ULL;
+
+ return ((x >> 27) | m) ^ ((x << 5) & m);
+}
+
+static uint64_t shuffle64(uint64_t x)
+{
+ return ((x & 0x0000000200000002) << 29) | ((x & 0xAAAAAAA8AAAAAAA8) >> 3) |
+ ((x & 0x1555555515555555) << 3) | ((x & 0x4000000040000000) >> 29);
+}
+
+static uint64_t pair(uint32_t x, uint32_t y)
+{
+ return ((uint64_t)x << 32) | y;
+}
+
+static uint64_t combine64(uint64_t x, uint64_t y)
+{
+ uint64_t x1 = ror_inv15(x);
+ uint64_t x2 = ror_inv13(x);
+ uint64_t y1 = ror_inv3(y);
+ uint64_t y2 = ror_inv27(y);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint64_t mix64(uint64_t x, uint64_t y)
+{
+ return shuffle64(combine64(x, y));
+}
+
+static uint32_t calc16(const uint32_t key[16])
+{
+ /*
+ * 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Layer 0
+ * \./ \./ \./ \./ \./ \./ \./ \./
+ * 0 1 2 3 4 5 6 7 Layer 1
+ * \__.__/ \__.__/ \__.__/ \__.__/
+ * 0 1 2 3 Layer 2
+ * \______.______/ \______.______/
+ * 0 1 Layer 3
+ * \______________.______________/
+ * 0 Layer 4
+ * / \
+ * \./
+ * 0 Layer 5
+ * / \
+ * \./ Layer 6
+ * value
+ */
+
+ uint64_t z;
+ uint32_t x;
+
+ z = mix64(mix64(mix64(pair(key[0], key[8]), pair(key[1], key[9])),
+ mix64(pair(key[2], key[10]), pair(key[3], key[11]))),
+ mix64(mix64(pair(key[4], key[12]), pair(key[5], key[13])),
+ mix64(pair(key[6], key[14]), pair(key[7], key[15]))));
+
+ x = mix((uint32_t)(z >> 32), (uint32_t)z);
+ x = mix(x, ror_inv(x, 17));
+ x = combine(x, ror_inv(x, 17));
+
+ return x;
+}
+
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result)
+{
+ uint64_t val;
+ uint32_t res;
+
+ val = calc16(key);
+ res = (uint32_t)val;
+
+ if (hsh->cam_bw > 32)
+ val = (val << (hsh->cam_bw - 32)) ^ val;
+
+ for (int i = 0; i < hsh->banks; i++) {
+ result[i] = (unsigned int)(val & hsh->cam_records_bw_mask);
+ val = val >> hsh->cam_records_bw;
+ }
+
+ return res;
+}
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records)
+{
+ hsh->banks = banks;
+ hsh->cam_records_bw = (int)(log2(nb_records - 1) + 1);
+ hsh->cam_records_bw_mask = (1U << hsh->cam_records_bw) - 1;
+ hsh->cam_bw = hsh->banks * hsh->cam_records_bw;
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.h b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
new file mode 100644
index 0000000000..15de8e9933
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_HASHER_H_
+#define _FLOW_HASHER_H_
+
+#include <stdint.h>
+
+struct hasher_s {
+ int banks;
+ int cam_records_bw;
+ uint32_t cam_records_bw_mask;
+ int cam_bw;
+};
+
+int init_hasher(struct hasher_s *hsh, int _banks, int nb_records);
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result);
+
+#endif /* _FLOW_HASHER_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 30d6ea728e..f79919cb81 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -9,6 +9,7 @@
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
#include "nt_util.h"
+#include "flow_hasher.h"
#define MAX_QWORDS 2
#define MAX_SWORDS 2
@@ -75,10 +76,25 @@ static int tcam_find_mapping(struct km_flow_def_s *km);
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
{
+ /*
+ * KM entries occupied in CAM - to manage the cuckoo shuffling
+ * and manage CAM population and usage
+ * KM entries occupied in TCAM - to manage population and usage
+ */
+ if (!*handle) {
+ *handle = calloc(1,
+ (size_t)CAM_ENTRIES + sizeof(uint32_t) + (size_t)TCAM_ENTRIES +
+ sizeof(struct hasher_s));
+ NT_LOG(DBG, FILTER, "Allocate NIC DEV CAM and TCAM record manager");
+ }
+
km->cam_dist = (struct cam_distrib_s *)*handle;
km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
km->tcam_dist =
(struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+
+ km->hsh = (struct hasher_s *)((char *)km->tcam_dist + TCAM_ENTRIES);
+ init_hasher(km->hsh, km->be->km.nb_cam_banks, km->be->km.nb_cam_records);
}
void km_free_ndev_resource_management(void **handle)
@@ -839,9 +855,18 @@ static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx
static int km_write_data_to_cam(struct km_flow_def_s *km)
{
int res = 0;
+ int val[MAX_BANKS];
assert(km->be->km.nb_cam_banks <= MAX_BANKS);
assert(km->cam_dist);
+ /* word list without info set */
+ gethash(km->hsh, km->entry_word, val);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ /* if paired we start always on an even address - reset bit 0 */
+ km->record_indexes[i] = (km->cam_paired) ? val[i] & ~1 : val[i];
+ }
+
NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
km->record_indexes[1], km->record_indexes[2]);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
index df5c00ac42..1750d09afb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
@@ -89,3 +89,182 @@ int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->hsh_rcp_flush(be->be_dev, &be->hsh, start_idx, count);
}
+
+static int hw_mod_hsh_rcp_mod(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t *value, int get)
+{
+ if (index >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 5:
+ switch (field) {
+ case HW_HSH_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->hsh.v5.rcp[index], (uint8_t)*value,
+ sizeof(struct hsh_v5_rcp_s));
+ break;
+
+ case HW_HSH_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off);
+ break;
+
+ case HW_HSH_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off,
+ be->hsh.nb_rcp);
+ break;
+
+ case HW_HSH_RCP_LOAD_DIST_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].load_dist_type, value);
+ break;
+
+ case HW_HSH_RCP_MAC_PORT_MASK:
+ if (word_off > HSH_RCP_MAC_PORT_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].mac_port_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SORT:
+ GET_SET(be->hsh.v5.rcp[index].sort, value);
+ break;
+
+ case HW_HSH_RCP_QW0_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw0_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_HSH_RCP_QW4_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw4_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_PE:
+ GET_SET(be->hsh.v5.rcp[index].w8_pe, value);
+ break;
+
+ case HW_HSH_RCP_W8_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w8_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w8_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_PE:
+ GET_SET(be->hsh.v5.rcp[index].w9_pe, value);
+ break;
+
+ case HW_HSH_RCP_W9_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w9_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W9_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w9_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_P:
+ GET_SET(be->hsh.v5.rcp[index].w9_p, value);
+ break;
+
+ case HW_HSH_RCP_P_MASK:
+ GET_SET(be->hsh.v5.rcp[index].p_mask, value);
+ break;
+
+ case HW_HSH_RCP_WORD_MASK:
+ if (word_off > HSH_RCP_WORD_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].word_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SEED:
+ GET_SET(be->hsh.v5.rcp[index].seed, value);
+ break;
+
+ case HW_HSH_RCP_TNL_P:
+ GET_SET(be->hsh.v5.rcp[index].tnl_p, value);
+ break;
+
+ case HW_HSH_RCP_HSH_VALID:
+ GET_SET(be->hsh.v5.rcp[index].hsh_valid, value);
+ break;
+
+ case HW_HSH_RCP_HSH_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].hsh_type, value);
+ break;
+
+ case HW_HSH_RCP_TOEPLITZ:
+ GET_SET(be->hsh.v5.rcp[index].toeplitz, value);
+ break;
+
+ case HW_HSH_RCP_K:
+ if (word_off > HSH_RCP_KEY_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].k[word_off], value);
+ break;
+
+ case HW_HSH_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->hsh.v5.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 5 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value)
+{
+ return hw_mod_hsh_rcp_mod(be, field, index, word_off, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index feac15cd9f..8b62ce11dd 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -29,9 +29,15 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_hsh {
+ struct hw_db_inline_hsh_data data;
+ int ref;
+ } *hsh;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_hsh;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -121,6 +127,21 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
}
+ db->cfn = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cfn));
+
+ if (db->cfn == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_hsh = ndev->be.hsh.nb_rcp;
+ db->hsh = calloc(db->nb_hsh, sizeof(struct hw_db_inline_resource_db_hsh));
+
+ if (db->hsh == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -132,6 +153,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->hsh);
+
free(db->cat);
if (db->km) {
@@ -179,6 +202,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_HSH:
+ hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -218,6 +245,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_KM_FT:
return NULL; /* FTs can't be easily looked up */
+ case HW_DB_IDX_TYPE_HSH:
+ return &db->hsh[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -246,6 +276,7 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
{
(void)ft;
(void)qsl_hw_id;
+ (void)ft;
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
@@ -847,3 +878,114 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* HSH */
+/******************************************************************************/
+
+static int hw_db_inline_hsh_compare(const struct hw_db_inline_hsh_data *data1,
+ const struct hw_db_inline_hsh_data *data2)
+{
+ for (uint32_t i = 0; i < MAX_RSS_KEY_LEN; ++i)
+ if (data1->key[i] != data2->key[i])
+ return 0;
+
+ return data1->func == data2->func && data1->hash_mask == data2->hash_mask;
+}
+
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_hsh_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_HSH;
+
+ /* check if default hash configuration shall be used, i.e. rss_hf is not set */
+ /*
+ * NOTE: hsh id 0 is reserved for "default"
+ * HSH used by port configuration; All ports share the same default hash settings.
+ */
+ if (data->hash_mask == 0) {
+ idx.ids = 0;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_hsh; ++i) {
+ int ref = db->hsh[i].ref;
+
+ if (ref > 0 && hw_db_inline_hsh_compare(data, &db->hsh[i].data)) {
+ idx.ids = i;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ struct nt_eth_rss_conf tmp_rss_conf;
+
+ tmp_rss_conf.rss_hf = data->hash_mask;
+ memcpy(tmp_rss_conf.rss_key, data->key, MAX_RSS_KEY_LEN);
+ tmp_rss_conf.algorithm = data->func;
+ int res = flow_nic_set_hasher_fields(ndev, idx.ids, tmp_rss_conf);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->hsh[idx.ids].ref = 1;
+ memcpy(&db->hsh[idx.ids].data, data, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, idx.ids);
+
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->hsh[idx.ids].ref += 1;
+}
+
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->hsh[idx.ids].ref -= 1;
+
+ if (db->hsh[idx.ids].ref <= 0) {
+ /*
+ * NOTE: hsh id 0 is reserved for "default" HSH used by
+ * port configuration, so we shall keep it even if
+ * it is not used by any flow
+ */
+ if (idx.ids > 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, idx.ids, 0, 0x0);
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->hsh[idx.ids].data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_free_resource(ndev, RES_HSH_RCP, idx.ids);
+ }
+
+ db->hsh[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index e104ba7327..c97bdef1b7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -60,6 +60,10 @@ struct hw_db_km_ft {
HW_DB_IDX;
};
+struct hw_db_hsh_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
@@ -68,6 +72,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_SLC_LR,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
+ HW_DB_IDX_TYPE_HSH,
};
/* Functionality data types */
@@ -133,6 +138,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_hsh_idx hsh;
};
};
};
@@ -175,6 +181,11 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data);
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 811659118d..2d795e2c7f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -24,6 +24,15 @@
#define NT_VIOLATING_MBR_CFN 0
#define NT_VIOLATING_MBR_QSL 1
+#define RTE_ETH_RSS_UDP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)
+
+#define RTE_ETH_RSS_TCP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX)
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2324,6 +2333,23 @@ static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_d
}
}
+static void setup_db_hsh_data(struct nic_flow_def *fd, struct hw_db_inline_hsh_data *hsh_data)
+{
+ memset(hsh_data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+
+ hsh_data->func = fd->hsh.func;
+ hsh_data->hash_mask = fd->hsh.types;
+
+ if (fd->hsh.key != NULL) {
+ /*
+	 * Just a safeguard: checking and error handling of rss_key_len
+	 * shall be done at the API layers above.
+ */
+ memcpy(&hsh_data->key, fd->hsh.key,
+ fd->hsh.key_len < MAX_RSS_KEY_LEN ? fd->hsh.key_len : MAX_RSS_KEY_LEN);
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
@@ -2340,7 +2366,6 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
(void)flm_rpl_ext_ptr;
(void)flm_ft;
(void)flm_scrub;
- (void)hsh_data;
const bool empty_pattern = fd_has_empty_pattern(fd);
@@ -2370,6 +2395,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle, hsh_data);
+ local_idxs[(*local_idx_counter)++] = hsh_idx.raw;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2419,6 +2455,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
+ setup_db_hsh_data(fd, &hsh_data);
if (attr->group > 0 && fd_has_empty_pattern(fd)) {
/*
@@ -2502,6 +2539,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle,
+ &hsh_data);
+ fh->db_idxs[fh->db_idx_counter++] = hsh_idx.raw;
+ action_set_data.hsh = hsh_idx;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2681,6 +2731,126 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
return NULL;
}
+/*
+ * Public functions
+ */
+
+/*
+ * FPGA uses up to 10 32-bit words (320 bits) for hash calculation + 8 bits for L4 protocol number.
+ * Hashed data are split between two 128-bit Quad Words (QW)
+ * and two 32-bit Words (W), which can refer to different header parts.
+ */
+enum hsh_words_id {
+ HSH_WORDS_QW0 = 0,
+ HSH_WORDS_QW4,
+ HSH_WORDS_W8,
+ HSH_WORDS_W9,
+ HSH_WORDS_SIZE,
+};
+
+/* struct with details about hash QWs & Ws */
+struct hsh_words {
+ /*
+ * index of W (word) or index of 1st word of QW (quad word)
+ * is used for hash mask calculation
+ */
+ uint8_t index;
+ uint8_t toeplitz_index; /* offset in Bytes of given [Q]W inside Toeplitz RSS key */
+ enum hw_hsh_e pe; /* offset to header part, e.g. beginning of L4 */
+ enum hw_hsh_e ofs; /* relative offset in BYTES to 'pe' header offset above */
+ uint16_t bit_len; /* max length of header part in bits to fit into QW/W */
+ bool free; /* only free words can be used for hsh calculation */
+};
+
+static enum hsh_words_id get_free_word(struct hsh_words *words, uint16_t bit_len)
+{
+ enum hsh_words_id ret = HSH_WORDS_SIZE;
+ uint16_t ret_bit_len = UINT16_MAX;
+
+ for (enum hsh_words_id i = HSH_WORDS_QW0; i < HSH_WORDS_SIZE; i++) {
+ if (words[i].free && bit_len <= words[i].bit_len &&
+ words[i].bit_len < ret_bit_len) {
+ ret = i;
+ ret_bit_len = words[i].bit_len;
+ }
+ }
+
+ return ret;
+}
+
+static int flow_nic_set_hasher_part_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct hsh_words *words, uint32_t pe, uint32_t ofs,
+ int bit_len, bool toeplitz)
+{
+ int res = 0;
+
+	/* check if there is any free word that can accommodate a header part of the given 'bit_len' */
+ enum hsh_words_id word = get_free_word(words, bit_len);
+
+ if (word == HSH_WORDS_SIZE) {
+ NT_LOG(ERR, FILTER, "Cannot add additional %d bits into hash", bit_len);
+ return -1;
+ }
+
+ words[word].free = false;
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].pe, hsh_idx, 0, pe);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].pe,
+ hsh_idx, pe);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].ofs, hsh_idx, 0, ofs);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].ofs,
+ hsh_idx, ofs);
+
+ /* set HW_HSH_RCP_WORD_MASK based on used QW/W and given 'bit_len' */
+ int mask_bit_len = bit_len;
+ uint32_t mask = 0x0;
+ uint32_t mask_be = 0x0;
+ uint32_t toeplitz_mask[9] = { 0x0 };
+ /* iterate through all words of QW */
+ uint16_t words_count = words[word].bit_len / 32;
+
+ for (uint16_t mask_off = 1; mask_off <= words_count; mask_off++) {
+ if (mask_bit_len >= 32) {
+ mask_bit_len -= 32;
+ mask = 0xffffffff;
+ mask_be = mask;
+
+ } else if (mask_bit_len > 0) {
+ /* keep bits from left to right, i.e. little to big endian */
+ mask_be = 0xffffffff >> (32 - mask_bit_len);
+ mask = mask_be << (32 - mask_bit_len);
+ mask_bit_len = 0;
+
+ } else {
+ mask = 0x0;
+ mask_be = 0x0;
+ }
+
+ /* reorder QW words mask from little to big endian */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx,
+ words[word].index + words_count - mask_off, mask);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, words[word].index + words_count - mask_off, mask);
+ toeplitz_mask[words[word].toeplitz_index + mask_off - 1] = mask_be;
+ }
+
+ if (toeplitz) {
+ NT_LOG(DBG, FILTER,
+ "Partial Toeplitz RSS key mask: %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 "",
+ toeplitz_mask[8], toeplitz_mask[7], toeplitz_mask[6], toeplitz_mask[5],
+ toeplitz_mask[4], toeplitz_mask[3], toeplitz_mask[2], toeplitz_mask[1],
+ toeplitz_mask[0]);
+ NT_LOG(DBG, FILTER,
+ " MSB LSB");
+ }
+
+ return res;
+}
+
/*
* Public functions
*/
@@ -2731,6 +2901,12 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+ /* Set default hasher recipe to 5-tuple */
+ flow_nic_set_hasher(ndev, 0, HASH_ALGO_5TUPLE);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -2797,6 +2973,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, 0, 0, 0);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
@@ -2994,6 +3174,672 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
+{
+ return (hash_mask & hash_bits) == hash_bits;
+}
+
+static __rte_always_inline void unset_bits(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ *hash_mask &= ~hash_bits;
+}
+
+static __rte_always_inline void unset_bits_and_log(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", *hash_mask & hash_bits) == 0)
+ NT_LOG(DBG, FILTER, "Configured RSS types:%s", rss_buffer);
+
+ unset_bits(hash_mask, hash_bits);
+}
+
+static __rte_always_inline void unset_bits_if_all_enabled(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ if (all_bits_enabled(*hash_mask, hash_bits))
+ unset_bits(hash_mask, hash_bits);
+}
+
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ uint64_t fields = rss_conf.rss_hf;
+
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", fields) == 0)
+ NT_LOG(DBG, FILTER, "Requested RSS types:%s", rss_buffer);
+
+ /*
+ * configure all (Q)Words usable for hash calculation
+ * Hash can be calculated from 4 independent header parts:
+	 * |    QW0    |    QW4    | W8| W9|
+ * word | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
+ */
+ struct hsh_words words[HSH_WORDS_SIZE] = {
+ { 0, 5, HW_HSH_RCP_QW0_PE, HW_HSH_RCP_QW0_OFS, 128, true },
+ { 4, 1, HW_HSH_RCP_QW4_PE, HW_HSH_RCP_QW4_OFS, 128, true },
+ { 8, 0, HW_HSH_RCP_W8_PE, HW_HSH_RCP_W8_OFS, 32, true },
+ {
+ 9, 255, HW_HSH_RCP_W9_PE, HW_HSH_RCP_W9_OFS, 32,
+ true
+ }, /* not supported for Toeplitz */
+ };
+
+ int res = 0;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+ /* enable hashing */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+
+ /* configure selected hash function and its key */
+ bool toeplitz = false;
+
+ switch (rss_conf.algorithm) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ /* Use default NTH10 hashing algorithm */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 0);
+ /* Use 1st 32-bits from rss_key to configure NTH10 SEED */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0,
+ rss_conf.rss_key[0] << 24 | rss_conf.rss_key[1] << 16 |
+ rss_conf.rss_key[2] << 8 | rss_conf.rss_key[3]);
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ toeplitz = true;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 1);
+ uint8_t empty_key = 0;
+
+ /* Toeplitz key (always 40B) must be encoded from little to big endian */
+ for (uint8_t i = 0; i <= (MAX_RSS_KEY_LEN - 8); i += 8) {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 |
+ rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 |
+ rss_conf.rss_key[i + 7]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 | rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 | rss_conf.rss_key[i + 7]);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 |
+ rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 |
+ rss_conf.rss_key[i + 3]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 | rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 | rss_conf.rss_key[i + 3]);
+ empty_key |= rss_conf.rss_key[i] | rss_conf.rss_key[i + 1] |
+ rss_conf.rss_key[i + 2] | rss_conf.rss_key[i + 3] |
+ rss_conf.rss_key[i + 4] | rss_conf.rss_key[i + 5] |
+ rss_conf.rss_key[i + 6] | rss_conf.rss_key[i + 7];
+ }
+
+ if (empty_key == 0) {
+ NT_LOG(ERR, FILTER,
+ "Toeplitz key must be configured. Key with all bytes set to zero is not allowed.");
+ return -1;
+ }
+
+ words[HSH_WORDS_W9].free = false;
+ NT_LOG(DBG, FILTER,
+ "Toeplitz hashing is enabled thus W9 and P_MASK cannot be used.");
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Unknown hashing function %d requested", rss_conf.algorithm);
+ return -1;
+ }
+
+ /* indication that some IPv6 flag is present */
+ bool ipv6 = fields & (NT_ETH_RSS_IPV6_MASK);
+ /* store proto mask for later use at IP and L4 checksum handling */
+ uint64_t l4_proto_mask = fields &
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX);
+
+ /* outermost headers are used by default, so innermost bit takes precedence if detected */
+ bool outer = (fields & RTE_ETH_RSS_LEVEL_INNERMOST) ? false : true;
+ unset_bits(&fields, RTE_ETH_RSS_LEVEL_MASK);
+
+ if (fields == 0) {
+ NT_LOG(ERR, FILTER, "RSS hash configuration 0x%" PRIX64 " is not valid.",
+ rss_conf.rss_hf);
+ return -1;
+ }
+
+ /* indication that IPv4 `protocol` or IPv6 `next header` fields shall be part of the hash
+ */
+ bool l4_proto_hash = false;
+
+ /*
+ * check if SRC_ONLY & DST_ONLY are used simultaneously;
+	 * According to DPDK, we shall behave as if none of these bits were set
+ */
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+
+ /* L2 */
+ if (fields & (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 6, 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 96, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 6,
+ 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 96, toeplitz);
+ }
+
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY |
+ RTE_ETH_RSS_L2_DST_ONLY);
+ }
+
+ /*
+ * VLAN support of multiple VLAN headers,
+ * where S-VLAN is the first and C-VLAN the last VLAN header
+ */
+ if (fields & RTE_ETH_RSS_C_VLAN) {
+ /*
+		 * use the MPLS protocol offset, which points just after the ethertype,
+		 * with a relative offset of -6 (i.e. 2 bytes of ethertype & size +
+		 * 4 bytes of VLAN header field) to access the last VLAN header
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer C-VLAN hasher.");
+ /*
+			 * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, -6,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner C-VLAN hasher.");
+ /*
+			 * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ -6, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_C_VLAN);
+ }
+
+ if (fields & RTE_ETH_RSS_S_VLAN) {
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer S-VLAN hasher.");
+ /*
+			 * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_FIRST_VLAN, 0, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner S-VLAN hasher.");
+ /*
+			 * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_VLAN,
+ 0, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_S_VLAN);
+ }
+ /* L2 payload */
+	/* calculate hash of 128 bits of L2 payload; use MPLS protocol offset to address the
+	 * beginning of the L2 payload even if no MPLS header is present
+ */
+ if (fields & RTE_ETH_RSS_L2_PAYLOAD) {
+ uint64_t outer_fields_enabled = 0;
+
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ 0, 128, toeplitz);
+ outer_fields_enabled = fields & RTE_ETH_RSS_GTPU;
+ }
+
+ /*
+ * L2 PAYLOAD hashing overrides all L3 & L4 RSS flags.
+ * Thus we can clear all remaining (supported)
+ * RSS flags...
+ */
+ unset_bits_and_log(&fields, NT_ETH_RSS_OFFLOAD_MASK);
+ /*
+ * ...but in case of INNER L2 PAYLOAD we must process
+ * "always outer" GTPU field if enabled
+ */
+ fields |= outer_fields_enabled;
+ }
+
+ /* L3 + L4 protocol number */
+ if (fields & RTE_ETH_RSS_IPV4_CHKSUM) {
+ /* only IPv4 checksum is supported by DPDK RTE_ETH_RSS_* types */
+ if (ipv6) {
+ NT_LOG(ERR, FILTER,
+ "RSS: IPv4 checksum requested with IPv6 header hashing!");
+ res = 1;
+
+ } else if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L3, 10,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L3,
+ 10, 16, toeplitz);
+ }
+
+ /*
+ * L3 checksum is made from whole L3 header, i.e. no need to process other
+ * L3 hashing flags
+ */
+ unset_bits_and_log(&fields, RTE_ETH_RSS_IPV4_CHKSUM | NT_ETH_RSS_IP_MASK);
+ }
+
+ if (fields & NT_ETH_RSS_IP_MASK) {
+ if (ipv6) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & (RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6)) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 32, toeplitz);
+ }
+ }
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0,
+ 1);
+
+ } else {
+ /* IPv4 */
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 32, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 16,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 64, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 32,
+ toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 16, 32,
+ toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 64,
+ toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & RTE_ETH_RSS_FRAG_IPV4) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 16, toeplitz);
+ }
+ }
+ }
+
+ /* check if L4 protocol type shall be part of hash */
+ if (l4_proto_mask)
+ l4_proto_hash = true;
+
+ unset_bits_and_log(&fields, NT_ETH_RSS_IP_MASK);
+ }
+
+ /* L4 */
+ if (fields & (RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 2, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 32, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 2,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 32, toeplitz);
+ }
+
+ l4_proto_hash = true;
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY);
+ }
+
+ /* IPv4 protocol / IPv6 next header fields */
+ if (l4_proto_hash) {
+		/* NOTE: HW_HSH_RCP_P_MASK is not supported for Toeplitz and thus one of QW0, QW4
+		 * or W8 must be used to hash on the `protocol` field of IPv4 or `next header` field of
+ * IPv6 header.
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 6, 8,
+ toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 9, 8,
+ toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 0);
+ }
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 6, 8, toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 9, 8, toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 1);
+ }
+ }
+
+ l4_proto_hash = false;
+ }
+
+ /*
+ * GTPU - for UPF use cases we always use TEID from outermost GTPU header
+ * even if other headers are innermost
+ */
+ if (fields & RTE_ETH_RSS_GTPU) {
+ NT_LOG(DBG, FILTER, "Set outer GTPU TEID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L4_PAYLOAD, 4, 32,
+ toeplitz);
+ unset_bits_and_log(&fields, RTE_ETH_RSS_GTPU);
+ }
+
+ /* Checksums */
+ /* only UDP, TCP and SCTP checksums are supported */
+ if (fields & RTE_ETH_RSS_L4_CHKSUM) {
+ switch (l4_proto_mask) {
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_UDP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 6, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 6, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_TCP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 16, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 16, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 8, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 8, 32,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+
+ /* none or unsupported protocol was chosen */
+ case 0:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing is supported only for UDP, TCP and SCTP protocols");
+ res = -1;
+ break;
+
+ /* multiple L4 protocols were selected */
+ default:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing can be enabled just for one of UDP, TCP or SCTP protocols");
+ res = -1;
+ break;
+ }
+ }
+
+ if (fields || res != 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", rss_conf.rss_hf) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration%s is not supported for hash func %s.",
+ rss_buffer,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration 0x%" PRIX64
+ " is not supported for hash func %s.",
+ rss_conf.rss_hf,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+ }
+
+ return -1;
+ }
+
+ return res;
+}
+
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -3007,6 +3853,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b87f8542ac..e623bb2352 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,4 +38,8 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 149c549112..1069be2f85 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -252,6 +252,10 @@ struct profile_inline_ops {
int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v1 32/73] net/ntnic: add TPE module
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TX Packet Editor is a software abstraction module
that keeps track of the handful of FPGA modules
used to edit packets in the TX pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 16 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 373 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 70 ++
.../profile_inline/flow_api_profile_inline.c | 127 ++-
5 files changed, 1342 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index cee148807a..e16dcd478f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -889,24 +889,40 @@ void hw_mod_tpe_free(struct flow_api_backend_s *be);
int hw_mod_tpe_reset(struct flow_api_backend_s *be);
int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value);
int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
enum debug_mode_e {
FLOW_BACKEND_DEBUG_MODE_NONE = 0x0000,
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
index 0d73b795d5..ba8f2d0dbb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
@@ -169,6 +169,82 @@ int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpp_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpp_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpp_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpp_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPP_RCP_EXP:
+ GET_SET(be->tpe.v3.rpp_rcp[index].exp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpp_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* IFR_RCP
*/
@@ -203,6 +279,90 @@ int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_ins_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_ins_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.ins_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_ins_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_INS_RCP_DYN:
+ GET_SET(be->tpe.v3.ins_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_INS_RCP_OFS:
+ GET_SET(be->tpe.v3.ins_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_INS_RCP_LEN:
+ GET_SET(be->tpe.v3.ins_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_ins_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RCP
*/
@@ -220,6 +380,102 @@ int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v3_rpl_v4_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RCP_DYN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_RPL_RCP_OFS:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_RPL_RCP_LEN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].len, value);
+ break;
+
+ case HW_TPE_RPL_RCP_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_RCP_EXT_PRIO:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ext_prio, value);
+ break;
+
+ case HW_TPE_RPL_RCP_ETH_TYPE_WR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].eth_type_wr, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_EXT
*/
@@ -237,6 +493,86 @@ int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_ext_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_ext_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_ext[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_ext_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value, be->tpe.nb_rpl_ext_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_EXT_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_ext[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_EXT_META_RPL_LEN:
+ GET_SET(be->tpe.v3.rpl_ext[index].meta_rpl_len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_ext_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RPL
*/
@@ -254,6 +590,89 @@ int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rpl_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rpl_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rpl[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_rpl_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value, be->tpe.nb_rpl_depth);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RPL_VALUE:
+ if (get)
+ memcpy(value, be->tpe.v3.rpl_rpl[index].value,
+ sizeof(uint32_t) * 4);
+
+ else
+ memcpy(be->tpe.v3.rpl_rpl[index].value, value,
+ sizeof(uint32_t) * 4);
+
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_tpe_rpl_rpl_mod(be, field, index, value, 0);
+}
+
/*
* CPY_RCP
*/
@@ -273,6 +692,96 @@ int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_cpy_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_cpy_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ const uint32_t cpy_size = be->tpe.nb_cpy_writers * be->tpe.nb_rcp_categories;
+
+ if (index >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.cpy_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_cpy_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value, cpy_size);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CPY_RCP_READER_SELECT:
+ GET_SET(be->tpe.v3.cpy_rcp[index].reader_select, value);
+ break;
+
+ case HW_TPE_CPY_RCP_DYN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_CPY_RCP_OFS:
+ GET_SET(be->tpe.v3.cpy_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_CPY_RCP_LEN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_cpy_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* HFU_RCP
*/
@@ -290,6 +799,166 @@ int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_hfu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_hfu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.hfu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_hfu_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_outer_l4_len, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_hfu_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* CSU_RCP
*/
@@ -306,3 +975,91 @@ int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_csu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+
+static int hw_mod_tpe_csu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.csu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_csu_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol4_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il4_cmd, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_csu_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 8b62ce11dd..ea7cc82d54 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -29,6 +29,17 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_tpe {
+ struct hw_db_inline_tpe_data data;
+ int ref;
+ } *tpe;
+
+ struct hw_db_inline_resource_db_tpe_ext {
+ struct hw_db_inline_tpe_ext_data data;
+ int replace_ram_idx;
+ int ref;
+ } *tpe_ext;
+
struct hw_db_inline_resource_db_hsh {
struct hw_db_inline_hsh_data data;
int ref;
@@ -37,6 +48,8 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_tpe;
+ uint32_t nb_tpe_ext;
uint32_t nb_hsh;
/* Items */
@@ -100,6 +113,22 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_tpe = ndev->be.tpe.nb_rcp_categories;
+ db->tpe = calloc(db->nb_tpe, sizeof(struct hw_db_inline_resource_db_tpe));
+
+ if (db->tpe == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_tpe_ext = ndev->be.tpe.nb_rpl_ext_categories;
+ db->tpe_ext = calloc(db->nb_tpe_ext, sizeof(struct hw_db_inline_resource_db_tpe_ext));
+
+ if (db->tpe_ext == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -153,6 +182,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->tpe);
+ free(db->tpe_ext);
free(db->hsh);
free(db->cat);
@@ -194,6 +225,15 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_TPE:
+ hw_db_inline_tpe_deref(ndev, db_handle, *(struct hw_db_tpe_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ hw_db_inline_tpe_ext_deref(ndev, db_handle,
+ *(struct hw_db_tpe_ext_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -239,6 +279,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_SLC_LR:
return &db->slc_lr[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_TPE:
+ return &db->tpe[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ return &db->tpe_ext[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -651,6 +697,333 @@ void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
}
}
+/******************************************************************************/
+/* TPE */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_compare(const struct hw_db_inline_tpe_data *data1,
+ const struct hw_db_inline_tpe_data *data2)
+{
+ for (int i = 0; i < 6; ++i)
+ if (data1->writer[i].en != data2->writer[i].en ||
+ data1->writer[i].reader_select != data2->writer[i].reader_select ||
+ data1->writer[i].dyn != data2->writer[i].dyn ||
+ data1->writer[i].ofs != data2->writer[i].ofs ||
+ data1->writer[i].len != data2->writer[i].len)
+ return 0;
+
+ return data1->insert_len == data2->insert_len && data1->new_outer == data2->new_outer &&
+ data1->calc_eth_type_from_inner_ip == data2->calc_eth_type_from_inner_ip &&
+ data1->ttl_en == data2->ttl_en && data1->ttl_dyn == data2->ttl_dyn &&
+ data1->ttl_ofs == data2->ttl_ofs && data1->len_a_en == data2->len_a_en &&
+ data1->len_a_pos_dyn == data2->len_a_pos_dyn &&
+ data1->len_a_pos_ofs == data2->len_a_pos_ofs &&
+ data1->len_a_add_dyn == data2->len_a_add_dyn &&
+ data1->len_a_add_ofs == data2->len_a_add_ofs &&
+ data1->len_a_sub_dyn == data2->len_a_sub_dyn &&
+ data1->len_b_en == data2->len_b_en &&
+ data1->len_b_pos_dyn == data2->len_b_pos_dyn &&
+ data1->len_b_pos_ofs == data2->len_b_pos_ofs &&
+ data1->len_b_add_dyn == data2->len_b_add_dyn &&
+ data1->len_b_add_ofs == data2->len_b_add_ofs &&
+ data1->len_b_sub_dyn == data2->len_b_sub_dyn &&
+ data1->len_c_en == data2->len_c_en &&
+ data1->len_c_pos_dyn == data2->len_c_pos_dyn &&
+ data1->len_c_pos_ofs == data2->len_c_pos_ofs &&
+ data1->len_c_add_dyn == data2->len_c_add_dyn &&
+ data1->len_c_add_ofs == data2->len_c_add_ofs &&
+ data1->len_c_sub_dyn == data2->len_c_sub_dyn;
+}
+
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE;
+
+ for (uint32_t i = 1; i < db->nb_tpe; ++i) {
+ int ref = db->tpe[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_compare(data, &db->tpe[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe[idx.ids].ref = 1;
+ memcpy(&db->tpe[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_data));
+
+ if (data->insert_len > 0) {
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_RPP_RCP_EXP, idx.ids, data->insert_len);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_RPL_PTR, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_EXT_PRIO, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_ETH_TYPE_WR, idx.ids,
+ data->calc_eth_type_from_inner_ip);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+ }
+
+ for (uint32_t i = 0; i < 6; ++i) {
+ if (data->writer[i].en) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i,
+ data->writer[i].reader_select);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, data->writer[i].dyn);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, data->writer[i].ofs);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, data->writer[i].len);
+
+ } else {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, 0);
+ }
+
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_WR, idx.ids, data->len_a_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN, idx.ids,
+ data->new_outer);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_DYN, idx.ids,
+ data->len_a_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_OFS, idx.ids,
+ data->len_a_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_DYN, idx.ids,
+ data->len_a_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_OFS, idx.ids,
+ data->len_a_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_SUB_DYN, idx.ids,
+ data->len_a_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_WR, idx.ids, data->len_b_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_DYN, idx.ids,
+ data->len_b_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_OFS, idx.ids,
+ data->len_b_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_DYN, idx.ids,
+ data->len_b_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_OFS, idx.ids,
+ data->len_b_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_SUB_DYN, idx.ids,
+ data->len_b_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_WR, idx.ids, data->len_c_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_DYN, idx.ids,
+ data->len_c_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_OFS, idx.ids,
+ data->len_c_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_DYN, idx.ids,
+ data->len_c_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_OFS, idx.ids,
+ data->len_c_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_SUB_DYN, idx.ids,
+ data->len_c_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_WR, idx.ids, data->ttl_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_DYN, idx.ids, data->ttl_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_OFS, idx.ids, data->ttl_ofs);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe[idx.ids].ref -= 1;
+
+ if (db->tpe[idx.ids].ref <= 0) {
+ for (uint32_t i = 0; i < 6; ++i) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_PRESET_ALL,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->tpe[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_data));
+ db->tpe[idx.ids].ref = 0;
+ }
+}
+
+/******************************************************************************/
+/* TPE_EXT */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_ext_compare(const struct hw_db_inline_tpe_ext_data *data1,
+ const struct hw_db_inline_tpe_ext_data *data2)
+{
+ return data1->size == data2->size &&
+ memcmp(data1->hdr8, data2->hdr8, HW_DB_INLINE_MAX_ENCAP_SIZE) == 0;
+}
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_ext_idx idx = { .raw = 0 };
+ int rpl_rpl_length = ((int)data->size + 15) / 16;
+ int found = 0, rpl_rpl_index = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE_EXT;
+
+ if (data->size > HW_DB_INLINE_MAX_ENCAP_SIZE) {
+ idx.error = 1;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_tpe_ext; ++i) {
+ int ref = db->tpe_ext[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_ext_compare(data, &db->tpe_ext[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ext_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ rpl_rpl_index = flow_nic_alloc_resource_config(ndev, RES_TPE_RPL, rpl_rpl_length, 1);
+
+ if (rpl_rpl_index < 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe_ext[idx.ids].ref = 1;
+ db->tpe_ext[idx.ids].replace_ram_idx = rpl_rpl_index;
+ memcpy(&db->tpe_ext[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_ext_data));
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_RPL_PTR, idx.ids, rpl_rpl_index);
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_META_RPL_LEN, idx.ids, data->size);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_data[4];
+ memcpy(rpl_data, data->hdr32 + i * 4, sizeof(rpl_data));
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_data);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe_ext[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe_ext[idx.ids].ref -= 1;
+
+ if (db->tpe_ext[idx.ids].ref <= 0) {
+ const int rpl_rpl_length = ((int)db->tpe_ext[idx.ids].data.size + 15) / 16;
+ const int rpl_rpl_index = db->tpe_ext[idx.ids].replace_ram_idx;
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_zero[] = { 0, 0, 0, 0 };
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_zero);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, rpl_rpl_index + i);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ memset(&db->tpe_ext[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_ext_data));
+ db->tpe_ext[idx.ids].ref = 0;
+ }
+}
+
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
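The add/ref/deref trio above follows a common deduplicating reference-count pattern: `add` reuses an existing identical entry (bumping its refcount) or claims a free slot, and `deref` releases the hardware resources once the count drops to zero. A simplified, self-contained sketch of the idea (hypothetical types and sizes, not the driver's actual structures):

```c
#include <stdint.h>
#include <string.h>

#define DB_SLOTS 8

struct entry {
	uint32_t data;	/* stand-in for the real payload */
	int ref;	/* 0 = free slot */
};

static struct entry db[DB_SLOTS];

/* Return the slot index for data, reusing an identical live entry if
 * one exists; -1 when the table is full. */
static int db_add(uint32_t data)
{
	int free_slot = -1;

	for (int i = 0; i < DB_SLOTS; ++i) {
		if (db[i].ref > 0 && db[i].data == data) {
			db[i].ref += 1;		/* deduplicate: reuse entry */
			return i;
		}
		if (free_slot < 0 && db[i].ref <= 0)
			free_slot = i;
	}

	if (free_slot < 0)
		return -1;			/* resource exhaustion */

	db[free_slot].data = data;
	db[free_slot].ref = 1;
	return free_slot;
}

/* Drop one reference; clear the slot when the last one goes away. */
static void db_deref(int idx)
{
	if (--db[idx].ref <= 0)
		memset(&db[idx], 0, sizeof(db[idx]));
}
```

In the patch the same shape appears with hardware side effects: allocation also programs RPL RAM, and the final deref zeroes the RAM words before freeing the resource indexes.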
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index c97bdef1b7..18d959307e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -52,6 +52,60 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_inline_tpe_data {
+ uint32_t insert_len : 16;
+ uint32_t new_outer : 1;
+ uint32_t calc_eth_type_from_inner_ip : 1;
+ uint32_t ttl_en : 1;
+ uint32_t ttl_dyn : 5;
+ uint32_t ttl_ofs : 8;
+
+ struct {
+ uint32_t en : 1;
+ uint32_t reader_select : 3;
+ uint32_t dyn : 5;
+ uint32_t ofs : 14;
+ uint32_t len : 5;
+ uint32_t padding : 4;
+ } writer[6];
+
+ uint32_t len_a_en : 1;
+ uint32_t len_a_pos_dyn : 5;
+ uint32_t len_a_pos_ofs : 8;
+ uint32_t len_a_add_dyn : 5;
+ uint32_t len_a_add_ofs : 8;
+ uint32_t len_a_sub_dyn : 5;
+
+ uint32_t len_b_en : 1;
+ uint32_t len_b_pos_dyn : 5;
+ uint32_t len_b_pos_ofs : 8;
+ uint32_t len_b_add_dyn : 5;
+ uint32_t len_b_add_ofs : 8;
+ uint32_t len_b_sub_dyn : 5;
+
+ uint32_t len_c_en : 1;
+ uint32_t len_c_pos_dyn : 5;
+ uint32_t len_c_pos_ofs : 8;
+ uint32_t len_c_add_dyn : 5;
+ uint32_t len_c_add_ofs : 8;
+ uint32_t len_c_sub_dyn : 5;
+};
+
+struct hw_db_inline_tpe_ext_data {
+ uint32_t size;
+ union {
+ uint8_t hdr8[HW_DB_INLINE_MAX_ENCAP_SIZE];
+ uint32_t hdr32[(HW_DB_INLINE_MAX_ENCAP_SIZE + 3) / 4];
+ };
+};
+
+struct hw_db_tpe_idx {
+ HW_DB_IDX;
+};
+struct hw_db_tpe_ext_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -70,6 +124,9 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_TPE,
+ HW_DB_IDX_TYPE_TPE_EXT,
+
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
@@ -138,6 +195,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
};
@@ -181,6 +239,18 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data);
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data);
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+
struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_hsh_data *data);
void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 2d795e2c7f..2fce706ce1 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -17,6 +17,8 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
@@ -2426,6 +2428,92 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
}
}
+ /* Setup TPE EXT */
+ if (fd->tun_hdr.len > 0) {
+ assert(fd->tun_hdr.len <= HW_DB_INLINE_MAX_ENCAP_SIZE);
+
+ struct hw_db_inline_tpe_ext_data tpe_ext_data = {
+ .size = fd->tun_hdr.len,
+ };
+
+ memset(tpe_ext_data.hdr8, 0x0, HW_DB_INLINE_MAX_ENCAP_SIZE);
+ memcpy(tpe_ext_data.hdr8, fd->tun_hdr.d.hdr8, (fd->tun_hdr.len + 15) & ~15);
+
+ struct hw_db_tpe_ext_idx tpe_ext_idx =
+ hw_db_inline_tpe_ext_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_ext_data);
+ local_idxs[(*local_idx_counter)++] = tpe_ext_idx.raw;
+
+ if (tpe_ext_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE EXT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_rpl_ext_ptr)
+ *flm_rpl_ext_ptr = tpe_ext_idx.ids;
+ }
+
+ /* Setup TPE */
+ assert(fd->modify_field_count <= 6);
+
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip =
+ !fd->tun_hdr.new_outer && fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ tpe_data.writer[i].en = 1;
+ tpe_data.writer[i].reader_select = fd->modify_field[i].select;
+ tpe_data.writer[i].dyn = fd->modify_field[i].dyn;
+ tpe_data.writer[i].ofs = fd->modify_field[i].ofs;
+ tpe_data.writer[i].len = fd->modify_field[i].len;
+ }
+
+ if (fd->tun_hdr.new_outer) {
+ const int fcs_length = 4;
+
+ /* L4 length */
+ tpe_data.len_a_en = 1;
+ tpe_data.len_a_pos_dyn = DYN_L4;
+ tpe_data.len_a_pos_ofs = 4;
+ tpe_data.len_a_add_dyn = 18;
+ tpe_data.len_a_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_a_sub_dyn = DYN_L4;
+
+ /* L3 length */
+ tpe_data.len_b_en = 1;
+ tpe_data.len_b_pos_dyn = DYN_L3;
+ tpe_data.len_b_pos_ofs = fd->tun_hdr.ip_version == 4 ? 2 : 4;
+ tpe_data.len_b_add_dyn = 18;
+ tpe_data.len_b_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_b_sub_dyn = DYN_L3;
+
+ /* GTP length */
+ tpe_data.len_c_en = 1;
+ tpe_data.len_c_pos_dyn = DYN_L4_PAYLOAD;
+ tpe_data.len_c_pos_ofs = 2;
+ tpe_data.len_c_add_dyn = 18;
+ tpe_data.len_c_add_ofs = (uint32_t)(-8 - fcs_length) & 0xff;
+ tpe_data.len_c_sub_dyn = DYN_L4_PAYLOAD;
+ }
+
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle, &tpe_data);
+
+ local_idxs[(*local_idx_counter)++] = tpe_idx.raw;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
return 0;
}
@@ -2552,6 +2640,30 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup TPE */
+ if (fd->ttl_sub_enable) {
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip = !fd->tun_hdr.new_outer &&
+ fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_data);
+ fh->db_idxs[fh->db_idx_counter++] = tpe_idx.raw;
+ action_set_data.tpe = tpe_idx;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
}
/* Setup CAT */
@@ -2860,6 +2972,16 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (!ndev->flow_mgnt_prepared) {
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* KM Flow Type 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_mark_resource_used(ndev, RES_KM_CATEGORY, 0);
+
+ /* Reserved FLM Flow Types */
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_MISS_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_UNHANDLED_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_RCP, 0);
/* COT is locked to CFN. Don't set color for CFN 0 */
hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
@@ -2885,8 +3007,11 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
- /* SLC LR index 0 is reserved */
+ /* SLC LR and TPE indexes 0 are reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_EXT, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RPL, 0);
/* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
*/
--
2.45.0
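The `len_a_add_ofs = (uint32_t)(-fcs_length) & 0xff` expressions in the TPE setup above store a negative byte adjustment in an 8-bit hardware field as its two's-complement value. A minimal sketch of that encode/decode round trip (assuming an 8-bit field width, as the `& 0xff` masks suggest):

```c
#include <stdint.h>

/* Encode a small signed adjustment into an 8-bit hardware field
 * via two's-complement truncation. */
static uint8_t encode_ofs8(int adjust)
{
	return (uint32_t)adjust & 0xff;
}

/* Recover the signed adjustment from the 8-bit field by
 * sign-extending it back to a full int. */
static int decode_ofs8(uint8_t field)
{
	return (int8_t)field;
}
```

So the patch's `-fcs_length` (with `fcs_length = 4`) lands in the register field as `0xfc`, and the GTP case's `-8 - fcs_length` as `0xf4`.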
* [PATCH v1 33/73] net/ntnic: add FLM module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 32/73] net/ntnic: add TPE module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 34/73] net/ntnic: add flm rcp module Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher (FLM) module is a high-performance stateful SDRAM lookup
and programming engine that supports exact-match lookup, at line rate,
of up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
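
The exact-match lookup the Flow Matcher performs in SDRAM can be pictured, very loosely, as a hash-table probe keyed on flow fields. The toy sketch below is purely illustrative (hypothetical key layout and table size; the hardware engine works nothing like this internally):

```c
#include <stdint.h>

/* Toy flow key: a reduced 5-tuple (hypothetical layout). */
struct flow_key {
	uint32_t src_ip, dst_ip;
	uint16_t src_port, dst_port;
	uint8_t proto;
};

#define TABLE_SIZE 1024

struct flow_entry {
	struct flow_key key;
	uint32_t action;
	int in_use;
};

static struct flow_entry table[TABLE_SIZE];

/* FNV-1a style mix over the key fields (avoids struct padding). */
static uint32_t key_hash(const struct flow_key *k)
{
	uint32_t h = 2166136261u;
	h = (h ^ k->src_ip) * 16777619u;
	h = (h ^ k->dst_ip) * 16777619u;
	h = (h ^ k->src_port) * 16777619u;
	h = (h ^ k->dst_port) * 16777619u;
	h = (h ^ k->proto) * 16777619u;
	return h;
}

static int key_eq(const struct flow_key *a, const struct flow_key *b)
{
	return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
		a->src_port == b->src_port && a->dst_port == b->dst_port &&
		a->proto == b->proto;
}

/* Program ("learn") a flow; linear probing for a free slot. */
static int flow_learn(const struct flow_key *k, uint32_t action)
{
	uint32_t i = key_hash(k) % TABLE_SIZE;

	for (int probe = 0; probe < TABLE_SIZE; ++probe, i = (i + 1) % TABLE_SIZE) {
		if (!table[i].in_use) {
			table[i].key = *k;
			table[i].action = action;
			table[i].in_use = 1;
			return 0;
		}
	}

	return -1;	/* table full */
}

/* Exact-match lookup: 0 on hit (action filled in), -1 on miss. */
static int flow_lookup(const struct flow_key *k, uint32_t *action)
{
	uint32_t i = key_hash(k) % TABLE_SIZE;

	for (int probe = 0; probe < TABLE_SIZE; ++probe, i = (i + 1) % TABLE_SIZE) {
		if (table[i].in_use && key_eq(&table[i].key, k)) {
			*action = table[i].action;
			return 0;
		}
		if (!table[i].in_use)
			return -1;
	}

	return -1;
}
```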
---
drivers/net/ntnic/include/hw_mod_backend.h | 42 +++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 190 +++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 257 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 33 +++
.../profile_inline/flow_api_profile_inline.c | 222 ++++++++++++++-
.../flow_api_profile_inline_config.h | 129 +++++++++
drivers/net/ntnic/ntutil/nt_util.h | 8 +
8 files changed, 1109 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index e16dcd478f..de662c4ed1 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -367,6 +367,18 @@ int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
/* KCE/KCS/FTE KM */
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -374,6 +386,18 @@ int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
enum km_flm_if_select_e if_num, int index, uint32_t *value);
/* KCE/KCS/FTE FLM */
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -384,10 +408,14 @@ int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
@@ -638,7 +666,21 @@ int hw_mod_flm_reset(struct flow_api_backend_s *be);
int hw_mod_flm_control_flush(struct flow_api_backend_s *be);
int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+int hw_mod_flm_status_update(struct flow_api_backend_s *be);
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index 9164ec1ae0..985c821312 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -902,6 +902,95 @@ static int hw_mod_cat_kce_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kce_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kce_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v18.kce[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v21.kce[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* KCS
*/
@@ -925,6 +1014,95 @@ static int hw_mod_cat_kcs_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kcs_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kcs_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v18.kcs[index].category, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v21.kcs[index].category[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* FTE
*/
@@ -1094,6 +1272,12 @@ int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cte_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -1154,6 +1338,12 @@ int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cts_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 8c1f3f2d96..f5eaea7c4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -293,11 +293,268 @@ int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, u
return hw_mod_flm_control_mod(be, field, &value, 0);
}
+int hw_mod_flm_status_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_status_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_status_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STATUS_CALIB_SUCCESS:
+ GET_SET(be->flm.v25.status->calib_success, value);
+ break;
+
+ case HW_FLM_STATUS_CALIB_FAIL:
+ GET_SET(be->flm.v25.status->calib_fail, value);
+ break;
+
+ case HW_FLM_STATUS_INITDONE:
+ GET_SET(be->flm.v25.status->initdone, value);
+ break;
+
+ case HW_FLM_STATUS_IDLE:
+ GET_SET(be->flm.v25.status->idle, value);
+ break;
+
+ case HW_FLM_STATUS_CRITICAL:
+ GET_SET(be->flm.v25.status->critical, value);
+ break;
+
+ case HW_FLM_STATUS_PANIC:
+ GET_SET(be->flm.v25.status->panic, value);
+ break;
+
+ case HW_FLM_STATUS_CRCERR:
+ GET_SET(be->flm.v25.status->crcerr, value);
+ break;
+
+ case HW_FLM_STATUS_EFT_BP:
+ GET_SET(be->flm.v25.status->eft_bp, value);
+ break;
+
+ case HW_FLM_STATUS_CACHE_BUFFER_CRITICAL:
+ GET_SET(be->flm.v25.status->cache_buf_critical, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_status_mod(be, field, value, 1);
+}
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be)
{
return be->iface->flm_scan_flush(be->be_dev, &be->flm);
}
+static int hw_mod_flm_scan_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCAN_I:
+ GET_SET(be->flm.v25.scan->i, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_scan_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_load_bin_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_load_bin_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_LOAD_BIN:
+ GET_SET(be->flm.v25.load_bin->bin, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_load_bin_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_prio_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_prio_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PRIO_LIMIT0:
+ GET_SET(be->flm.v25.prio->limit0, value);
+ break;
+
+ case HW_FLM_PRIO_FT0:
+ GET_SET(be->flm.v25.prio->ft0, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT1:
+ GET_SET(be->flm.v25.prio->limit1, value);
+ break;
+
+ case HW_FLM_PRIO_FT1:
+ GET_SET(be->flm.v25.prio->ft1, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT2:
+ GET_SET(be->flm.v25.prio->limit2, value);
+ break;
+
+ case HW_FLM_PRIO_FT2:
+ GET_SET(be->flm.v25.prio->ft2, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT3:
+ GET_SET(be->flm.v25.prio->limit3, value);
+ break;
+
+ case HW_FLM_PRIO_FT3:
+ GET_SET(be->flm.v25.prio->ft3, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_prio_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count)
+{
+ if (count == ALL_ENTRIES)
+ count = be->flm.nb_pst_profiles;
+
+ if ((unsigned int)(start_idx + count) > be->flm.nb_pst_profiles) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ return be->iface->flm_pst_flush(be->be_dev, &be->flm, start_idx, count);
+}
+
+static int hw_mod_flm_pst_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.pst[index], (uint8_t)*value,
+ sizeof(struct flm_v25_pst_s));
+ break;
+
+ case HW_FLM_PST_BP:
+ GET_SET(be->flm.v25.pst[index].bp, value);
+ break;
+
+ case HW_FLM_PST_PP:
+ GET_SET(be->flm.v25.pst[index].pp, value);
+ break;
+
+ case HW_FLM_PST_TP:
+ GET_SET(be->flm.v25.pst[index].tp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_pst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index ea7cc82d54..e7bc9ec4b8 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -8,6 +8,14 @@
#include "flow_api_hw_db_inline.h"
+#define HW_DB_FT_LOOKUP_KEY_A 0
+
+#define HW_DB_FT_TYPE_KM 1
+#define HW_DB_FT_LOOKUP_KEY_A 0
+#define HW_DB_FT_LOOKUP_KEY_C 2
+
+#define HW_DB_FT_TYPE_FLM 0
+#define HW_DB_FT_TYPE_KM 1
/******************************************************************************/
/* Handle */
/******************************************************************************/
@@ -58,6 +66,23 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_resource_db_flm_ft {
+ struct hw_db_inline_flm_ft_data data;
+ struct hw_db_flm_ft idx;
+ int ref;
+ } *ft;
+
+ struct hw_db_inline_resource_db_flm_match_set {
+ struct hw_db_match_set_idx idx;
+ int ref;
+ } *match_set;
+
+ struct hw_db_inline_resource_db_flm_cfn_map {
+ int cfn_idx;
+ } *cfn_map;
+ } *flm;
+
struct hw_db_inline_resource_db_km_rcp {
struct hw_db_inline_km_rcp_data data;
int ref;
@@ -69,6 +94,7 @@ struct hw_db_inline_resource_db {
} *km;
uint32_t nb_cat;
+ uint32_t nb_flm_ft;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -172,6 +198,13 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
*db_handle = db;
+
+ /* Preset data */
+
+ db->flm[0].ft[1].idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ db->flm[0].ft[1].idx.id1 = 1;
+ db->flm[0].ft[1].ref = 1;
+
return 0;
}
@@ -234,6 +267,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ hw_db_inline_flm_ft_deref(ndev, db_handle,
+ *(struct hw_db_flm_ft *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -285,6 +323,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -306,6 +347,61 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
/* Filter */
/******************************************************************************/
+/*
+ * lookup refers to key A/B/C/D, and can have values 0, 1, 2, and 3.
+ */
+static void hw_db_set_ft(struct flow_nic_dev *ndev, int type, int cfn_index, int lookup,
+ int flow_type, int enable)
+{
+ (void)type;
+ (void)enable;
+
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index = (8 * flow_type + cfn_index / cat_funcs) * max_lookups + lookup;
+ int fte_field = cfn_index % cat_funcs;
+
+ uint32_t current_bm = 0;
+ uint32_t fte_field_bm = 1 << fte_field;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ ¤t_bm);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ ¤t_bm);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t final_bm = enable ? (fte_field_bm | current_bm) : (~fte_field_bm & current_bm);
+
+ if (current_bm != final_bm) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
/*
* Setup a filter to match:
* All packets in CFN checks
@@ -347,6 +443,17 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
return -1;
+ /* KM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match FT=ft_argument for look-up C */
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft, 1);
+
/* Make all CFN checks TRUE */
if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
return -1;
@@ -1251,6 +1358,133 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+/******************************************************************************/
+/* FLM FT */
+/******************************************************************************/
+
+static int hw_db_inline_flm_ft_compare(const struct hw_db_inline_flm_ft_data *data1,
+ const struct hw_db_inline_flm_ft_data *data2)
+{
+ return data1->is_group_zero == data2->is_group_zero && data1->jump == data2->jump &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ if (data->is_group_zero) {
+ idx.error = 1;
+ return idx;
+ }
+
+ if (flm_rcp->ft[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->group];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ /* RCP 0 always uses FT 1; i.e. use unhandled FT for disabled RCP */
+ if (data->group == 0) {
+ idx.id1 = 1;
+ return idx;
+ }
+
+ if (data->is_group_zero) {
+ idx.id3 = 1;
+ return idx;
+ }
+
+ /* FLM_FT records 0, 1 and last (15) are reserved */
+ /* NOTE: RES_FLM_FLOW_TYPE resource is global and it cannot be used in _add() and _deref()
+ * to track usage of FLM_FT recipes which are group specific.
+ */
+ for (uint32_t i = 2; i < db->nb_flm_ft; ++i) {
+ if (!found && flm_rcp->ft[i].ref <= 0 &&
+ !flow_nic_is_resource_used(ndev, RES_FLM_FLOW_TYPE, i)) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (flm_rcp->ft[i].ref > 0 &&
+ hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error && idx.id3 == 0)
+ db->flm[idx.id2].ft[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+
+ if (idx.error || idx.id2 == 0 || idx.id3 > 0)
+ return;
+
+ flm_rcp = &db->flm[idx.id2];
+
+ flm_rcp->ft[idx.id1].ref -= 1;
+
+ if (flm_rcp->ft[idx.id1].ref > 0)
+ return;
+
+ flm_rcp->ft[idx.id1].ref = 0;
+ memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
+}
/******************************************************************************/
/* HSH */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 18d959307e..a520ae1769 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_match_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_action_set_idx {
HW_DB_IDX;
};
@@ -106,6 +110,13 @@ struct hw_db_tpe_ext_idx {
HW_DB_IDX;
};
+struct hw_db_flm_idx {
+ HW_DB_IDX;
+};
+struct hw_db_flm_ft {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -128,6 +139,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE_EXT,
HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -211,6 +223,17 @@ struct hw_db_inline_km_ft_data {
struct hw_db_action_set_idx action_set;
};
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -277,6 +300,16 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx);
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_ft idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 2fce706ce1..acbf54c485 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -11,6 +11,7 @@
#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "flow_id_table.h"
#include "stream_binary_flow_api.h"
@@ -46,6 +47,128 @@ static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
return -1;
}
+/*
+ * Flow Matcher functionality
+ */
+
+static int flm_sdram_calibrate(struct flow_nic_dev *ndev)
+{
+ int success = 0;
+ uint32_t fail_value = 0;
+ uint32_t value = 0;
+
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_PRESET_ALL, 0x0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_SPLIT_SDRAM_USAGE, 0x10);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for ddr4 calibration/init done */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_SUCCESS, &value);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_FAIL, &fail_value);
+
+ if (value & 0x80000000) {
+ success = 1;
+ break;
+ }
+
+ if (fail_value != 0)
+ break;
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - SDRAM calibration failed");
+ NT_LOG(ERR, FILTER,
+ "Calibration status: success 0x%08" PRIx32 " - fail 0x%08" PRIx32,
+ value, fail_value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
+{
+ int success = 0;
+
+ /*
+ * Make sure no lookup is performed during init, i.e.
+ * disable every category and disable FLM
+ */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for FLM to enter Idle state */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_IDLE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - Never idle");
+ return -1;
+ }
+
+ success = 0;
+
+ /* Start SDRAM initialization */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x1);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_INITDONE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER,
+ "FLM initialization failed - SDRAM initialization incomplete");
+ return -1;
+ }
+
+ /* Set the INIT value back to zero to clear the bit in the SW register cache */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Enable FLM */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, enable);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ int nb_rpp_per_ps = ndev->be.flm.nb_rpp_clock_in_ps;
+ int nb_load_aps_max = ndev->be.flm.nb_load_aps_max;
+ uint32_t scan_i_value = 0;
+
+ if (NTNIC_SCANNER_LOAD > 0) {
+ scan_i_value = (1 / (nb_rpp_per_ps * 0.000000000001)) /
+ (nb_load_aps_max * NTNIC_SCANNER_LOAD);
+ }
+
+ hw_mod_flm_scan_set(&ndev->be, HW_FLM_SCAN_I, scan_i_value);
+ hw_mod_flm_scan_flush(&ndev->be);
+
+ return 0;
+}
+
+
+
struct flm_flow_key_def_s {
union {
struct {
@@ -2364,9 +2487,6 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub,
struct rte_flow_error *error)
{
- (void)group;
- (void)flm_rpl_ext_ptr;
- (void)flm_ft;
(void)flm_scrub;
const bool empty_pattern = fd_has_empty_pattern(fd);
@@ -2514,6 +2634,25 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 0,
+ .group = group,
+ };
+ struct hw_db_flm_ft flm_ft_idx = empty_pattern
+ ? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
+ : hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ local_idxs[(*local_idx_counter)++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_ft)
+ *flm_ft = flm_ft_idx.id1;
+
return 0;
}
@@ -2528,9 +2667,6 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
(void)packet_mask;
(void)key_def;
(void)forced_vlan_vid;
- (void)num_dest_port;
- (void)num_queues;
-
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
fh->type = FLOW_HANDLE_TYPE_FLOW;
@@ -2821,6 +2957,21 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 1,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ };
+ struct hw_db_flm_ft flm_ft_idx =
+ hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -3041,6 +3192,63 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
NT_VIOLATING_MBR_QSL) < 0)
goto err_exit0;
+ /* FLM */
+ if (flm_sdram_calibrate(ndev) < 0)
+ goto err_exit0;
+
+ if (flm_sdram_reset(ndev, 1) < 0)
+ goto err_exit0;
+
+ /* Learn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LDS, 0);
+ /* Learn fail status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LFS, 1);
+ /* Learn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LIS, 1);
+ /* Unlearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UDS, 0);
+ /* Unlearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UIS, 0);
+ /* Relearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RDS, 0);
+ /* Relearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RIS, 0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RBL, 4);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Set the sliding windows size for flm load */
+ uint32_t bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) /
+ (32ULL * ndev->be.flm.nb_rpp_clock_in_ps)) -
+ 1ULL);
+ hw_mod_flm_load_bin_set(&ndev->be, HW_FLM_LOAD_BIN, bin);
+ hw_mod_flm_load_bin_flush(&ndev->be);
+
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT0,
+ 0); /* Drop at 100% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT0, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT1,
+ 14); /* Drop at 87.5% FIFO fill level */

+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT1, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT2,
+ 10); /* Drop at 62.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT2, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT3,
+ 6); /* Drop at 37.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT3, 1);
+ hw_mod_flm_prio_flush(&ndev->be);
+
+ /* TODO How to set and use these limits */
+ for (uint32_t i = 0; i < ndev->be.flm.nb_pst_profiles; ++i) {
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_BP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_PP, i,
+ NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_TP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT);
+ }
+
+ hw_mod_flm_pst_flush(&ndev->be, 0, ALL_ENTRIES);
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -3069,6 +3277,8 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
#endif
if (ndev->flow_mgnt_prepared) {
+ flm_sdram_reset(ndev, 0);
+
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
new file mode 100644
index 0000000000..9e454e4c0f
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -0,0 +1,129 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_CONFIG_H_
+#define _FLOW_API_PROFILE_INLINE_CONFIG_H_
+
+/*
+ * Per port configuration for IPv4 fragmentation and DF flag handling
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV4_FRAGMENTATION | IPV4_DF_ACTION || Exceeding MTU | DF flag || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - | - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_DROP || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_FORWARD || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV4_DF_ACTION IPV4_DF_DROP
+
+#define PORT_1_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV4_DF_ACTION IPV4_DF_DROP
+
+
+/*
+ * Per port configuration for IPv6 fragmentation
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV6_FRAGMENTATION | IPV6_ACTION || Exceeding MTU || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DROP || no || Forward ||
+ * || | || yes || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | FRAGMENT || no || Forward ||
+ * || | || yes || Fragment ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV6_ACTION IPV6_DROP
+
+#define PORT_1_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV6_ACTION IPV6_DROP
+
+
+/*
+ * Statistics are generated each time the byte counter crosses a limit.
+ * If BYTE_LIMIT is zero then the byte counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_LIMIT + 15) bytes
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(8 + 15) = 2^23 ~~ 8MB
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT 8
+
+/*
+ * Statistics are generated each time the packet counter crosses a limit.
+ * If PKT_LIMIT is zero then the packet counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(PKT_LIMIT + 11) pkts
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(5 + 11) = 2^16 pkts ~~ 64K pkts
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT 5
+
+/*
+ * Statistics are generated each time flow time (measured in ns) crosses a
+ * limit.
+ * If BYTE_TIMEOUT is zero then the flow time does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_TIMEOUT + 15) ns
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(23 + 15) = 2^38 ns ~~ 275 sec
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT 23
+
+/*
+ * This define sets the percentage of the full processing capacity
+ * being reserved for scan operations. The scanner is responsible
+ * for detecting aged out flows and meters with statistics timeout.
+ *
+ * A high scanner load percentage will make this detection more precise
+ * but will also give lower packet processing capacity.
+ *
+ * The percentage is given as a decimal number, e.g. 0.01 for 1%, which is the recommended value.
+ */
+#define NTNIC_SCANNER_LOAD 0.01
+
+/*
+ * This define sets the timeout resolution of the aged flow scanner (scrubber).
+ *
+ * The timeout resolution feature is provided in order to reduce the number of
+ * write-back operations for flows without attached meter. If the resolution
+ * is disabled (set to 0) and flow timeout is enabled via age action, then a write-back
+ * occurs every time the flow is evicted from the flow cache, essentially causing the
+ * lookup performance to drop to that of a flow with meter. By setting the timeout
+ * resolution (>0), write-back for flows happens only when the difference between
+ * the last recorded time for the flow and the current time exceeds the chosen resolution.
+ *
+ * The parameter value is a power of 2 in units of 2^28 nanoseconds. It means that value 8 sets
+ * the timeout resolution to: 2^8 * 2^28 / 1e9 = 68.7 seconds
+ *
+ * NOTE: This parameter has a significant impact on flow lookup performance, especially
+ * if full scanner timeout resolution (=0) is configured.
+ */
+#define NTNIC_SCANNER_TIMEOUT_RESOLUTION 8
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 71ecd6c68c..a482fb43ad 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -16,6 +16,14 @@
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
+/*
+ * Window size in seconds for measuring FLM load
+ * and port load.
+ * The window size must be at most 3 minutes in
+ * order to prevent overflow.
+ */
+#define FLM_LOAD_WINDOWS_SIZE 2ULL
+
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
#define PCIIDENT_TO_BUSNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 8) & 0xFFU))
#define PCIIDENT_TO_DEVNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 3) & 0x1FU))
--
2.45.0
* [PATCH v1 34/73] net/ntnic: add flm rcp module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (32 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 33/73] net/ntnic: add FLM module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 35/73] net/ntnic: add learn flow queue handling Serhii Iliushyk
` (42 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup
and programming engine which supports exact-match lookup
at line rate for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 133 ++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 +++++++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 20 ++
.../profile_inline/flow_api_profile_inline.c | 40 +++-
5 files changed, 388 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index de662c4ed1..13722c30a9 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -683,6 +683,10 @@ int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value);
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f5eaea7c4e..0a7e90c04f 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -579,3 +579,136 @@ int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int cou
}
return be->iface->flm_scrub_flush(be->be_dev, &be->flm, start_idx, count);
}
+
+static int hw_mod_flm_rcp_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.rcp[index], (uint8_t)*value,
+ sizeof(struct flm_v25_rcp_s));
+ break;
+
+ case HW_FLM_RCP_LOOKUP:
+ GET_SET(be->flm.v25.rcp[index].lookup, value);
+ break;
+
+ case HW_FLM_RCP_QW0_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW0_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_FLM_RCP_QW0_SEL:
+ GET_SET(be->flm.v25.rcp[index].qw0_sel, value);
+ break;
+
+ case HW_FLM_RCP_QW4_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW4_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw8_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW8_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw8_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_SEL:
+ GET_SET(be->flm.v25.rcp[index].sw8_sel, value);
+ break;
+
+ case HW_FLM_RCP_SW9_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw9_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW9_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw9_ofs, value);
+ break;
+
+ case HW_FLM_RCP_MASK:
+ if (get) {
+ memcpy(value, be->flm.v25.rcp[index].mask,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+
+ } else {
+ memcpy(be->flm.v25.rcp[index].mask, value,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+ }
+
+ break;
+
+ case HW_FLM_RCP_KID:
+ GET_SET(be->flm.v25.rcp[index].kid, value);
+ break;
+
+ case HW_FLM_RCP_OPN:
+ GET_SET(be->flm.v25.rcp[index].opn, value);
+ break;
+
+ case HW_FLM_RCP_IPN:
+ GET_SET(be->flm.v25.rcp[index].ipn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_DYN:
+ GET_SET(be->flm.v25.rcp[index].byt_dyn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_OFS:
+ GET_SET(be->flm.v25.rcp[index].byt_ofs, value);
+ break;
+
+ case HW_FLM_RCP_TXPLM:
+ GET_SET(be->flm.v25.rcp[index].txplm, value);
+ break;
+
+ case HW_FLM_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->flm.v25.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value)
+{
+ if (field != HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, value, 0);
+}
+
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ if (field == HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index e7bc9ec4b8..089f8c8174 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -67,6 +67,9 @@ struct hw_db_inline_resource_db {
} *cat;
struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_flm_rcp_data data;
+ int ref;
+
struct hw_db_inline_resource_db_flm_ft {
struct hw_db_inline_flm_ft_data data;
struct hw_db_flm_ft idx;
@@ -95,6 +98,7 @@ struct hw_db_inline_resource_db {
uint32_t nb_cat;
uint32_t nb_flm_ft;
+ uint32_t nb_flm_rcp;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -163,6 +167,42 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+
+ db->nb_flm_ft = ndev->be.cat.nb_flow_types;
+ db->nb_flm_rcp = ndev->be.flm.nb_categories;
+ db->flm = calloc(db->nb_flm_rcp, sizeof(struct hw_db_inline_resource_db_flm_rcp));
+
+ if (db->flm == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ db->flm[i].ft =
+ calloc(db->nb_flm_ft, sizeof(struct hw_db_inline_resource_db_flm_ft));
+
+ if (db->flm[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].match_set =
+ calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_flm_match_set));
+
+ if (db->flm[i].match_set == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].cfn_map = calloc(db->nb_cat * db->nb_flm_ft,
+ sizeof(struct hw_db_inline_resource_db_flm_cfn_map));
+
+ if (db->flm[i].cfn_map == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
db->nb_km_ft = ndev->be.cat.nb_flow_types;
db->nb_km_rcp = ndev->be.km.nb_categories;
db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
@@ -221,6 +261,16 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cat);
+ if (db->flm) {
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ free(db->flm[i].ft);
+ free(db->flm[i].match_set);
+ free(db->flm[i].cfn_map);
+ }
+
+ free(db->flm);
+ }
+
if (db->km) {
for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
free(db->km[i].ft);
@@ -267,6 +317,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ hw_db_inline_flm_deref(ndev, db_handle, *(struct hw_db_flm_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_FLM_FT:
hw_db_inline_flm_ft_deref(ndev, db_handle,
*(struct hw_db_flm_ft *)&idxs[i]);
@@ -323,6 +377,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ return &db->flm[idxs[i].id1].data;
+
case HW_DB_IDX_TYPE_FLM_FT:
return NULL; /* FTs can't be easily looked up */
@@ -480,6 +537,20 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
return 0;
}
+static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int flm_rcp)
+{
+ uint32_t flm_mask[10];
+ memset(flm_mask, 0xff, sizeof(flm_mask));
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, flm_rcp, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, flm_rcp, 1);
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, flm_rcp, flm_mask);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, flm_rcp, flm_rcp + 2);
+
+ hw_mod_flm_rcp_flush(&ndev->be, flm_rcp, 1);
+}
+
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1267,10 +1338,17 @@ void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_d
void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
{
(void)ndev;
- (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
if (idx.error)
return;
+
+ db->km[idx.id1].ref -= 1;
+
+ if (db->km[idx.id1].ref <= 0) {
+ memset(&db->km[idx.id1].data, 0x0, sizeof(struct hw_db_inline_km_rcp_data));
+ db->km[idx.id1].ref = 0;
+ }
}
/******************************************************************************/
@@ -1358,6 +1436,121 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_flm_compare(const struct hw_db_inline_flm_rcp_data *data1,
+ const struct hw_db_inline_flm_rcp_data *data2)
+{
+ if (data1->qw0_dyn != data2->qw0_dyn || data1->qw0_ofs != data2->qw0_ofs ||
+ data1->qw4_dyn != data2->qw4_dyn || data1->qw4_ofs != data2->qw4_ofs ||
+ data1->sw8_dyn != data2->sw8_dyn || data1->sw8_ofs != data2->sw8_ofs ||
+ data1->sw9_dyn != data2->sw9_dyn || data1->sw9_ofs != data2->sw9_ofs ||
+ data1->outer_prot != data2->outer_prot || data1->inner_prot != data2->inner_prot) {
+ return 0;
+ }
+
+ for (int i = 0; i < 10; ++i)
+ if (data1->mask[i] != data2->mask[i])
+ return 0;
+
+ return 1;
+}
+
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_idx idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_RCP;
+ idx.id1 = group;
+
+ if (group == 0)
+ return idx;
+
+ if (db->flm[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_compare(data, &db->flm[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ref(ndev, db, idx);
+ return idx;
+ }
+
+ db->flm[idx.id1].ref = 1;
+ memcpy(&db->flm[idx.id1].data, data, sizeof(struct hw_db_inline_flm_rcp_data));
+
+ {
+ uint32_t flm_mask[10] = {
+ data->mask[0], /* SW9 */
+ data->mask[1], /* SW8 */
+ data->mask[5], data->mask[4], data->mask[3], data->mask[2], /* QW4 */
+ data->mask[9], data->mask[8], data->mask[7], data->mask[6], /* QW0 */
+ };
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, idx.id1, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, idx.id1, 1);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_DYN, idx.id1, data->qw0_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_OFS, idx.id1, data->qw0_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_DYN, idx.id1, data->qw4_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_OFS, idx.id1, data->qw4_ofs);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_DYN, idx.id1, data->sw8_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_OFS, idx.id1, data->sw8_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_DYN, idx.id1, data->sw9_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_OFS, idx.id1, data->sw9_ofs);
+
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, idx.id1, flm_mask);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, idx.id1, idx.id1 + 2);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_OPN, idx.id1, data->outer_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_IPN, idx.id1, data->inner_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_DYN, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_OFS, idx.id1, -20);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_TXPLM, idx.id1, UINT32_MAX);
+
+ hw_mod_flm_rcp_flush(&ndev->be, idx.id1, 1);
+ }
+
+ return idx;
+}
+
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->flm[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ if (idx.id1 > 0) {
+ db->flm[idx.id1].ref -= 1;
+
+ if (db->flm[idx.id1].ref <= 0) {
+ memset(&db->flm[idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_flm_rcp_data));
+ db->flm[idx.id1].ref = 0;
+
+ hw_db_inline_setup_default_flm_rcp(ndev, idx.id1);
+ }
+ }
+}
+
/******************************************************************************/
/* FLM FT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a520ae1769..9820225ffa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -138,6 +138,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE,
HW_DB_IDX_TYPE_TPE_EXT,
+ HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
@@ -165,6 +166,22 @@ struct hw_db_inline_cat_data {
uint8_t ip_prot_tunnel;
};
+struct hw_db_inline_flm_rcp_data {
+ uint64_t qw0_dyn : 5;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 5;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 5;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 5;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_prot : 1;
+ uint64_t inner_prot : 1;
+ uint64_t padding : 10;
+
+ uint32_t mask[10];
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -300,7 +317,10 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group);
void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_flm_ft_data *data);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index acbf54c485..18d8573bb7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -100,6 +100,11 @@ static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
hw_mod_flm_control_flush(&ndev->be);
+ for (uint32_t i = 1; i < ndev->be.flm.nb_categories; ++i)
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, i, 0x0);
+
+ hw_mod_flm_rcp_flush(&ndev->be, 1, ndev->be.flm.nb_categories - 1);
+
/* Wait for FLM to enter Idle state */
for (uint32_t i = 0; i < 1000000; ++i) {
uint32_t value = 0;
@@ -2664,8 +2669,6 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint32_t *packet_data, uint32_t *packet_mask,
struct flm_flow_key_def_s *key_def)
{
- (void)packet_mask;
- (void)key_def;
(void)forced_vlan_vid;
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
@@ -2698,6 +2701,31 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
* Flow for group 1..32
*/
+ /* Setup FLM RCP */
+ struct hw_db_inline_flm_rcp_data flm_data = {
+ .qw0_dyn = key_def->qw0_dyn,
+ .qw0_ofs = key_def->qw0_ofs,
+ .qw4_dyn = key_def->qw4_dyn,
+ .qw4_ofs = key_def->qw4_ofs,
+ .sw8_dyn = key_def->sw8_dyn,
+ .sw8_ofs = key_def->sw8_ofs,
+ .sw9_dyn = key_def->sw9_dyn,
+ .sw9_ofs = key_def->sw9_ofs,
+ .outer_prot = key_def->outer_proto,
+ .inner_prot = key_def->inner_proto,
+ };
+ memcpy(flm_data.mask, packet_mask, sizeof(uint32_t) * 10);
+ struct hw_db_flm_idx flm_idx =
+ hw_db_inline_flm_add(dev->ndev, dev->ndev->hw_db_handle, &flm_data,
+ attr->group);
+ fh->db_idxs[fh->db_idx_counter++] = flm_idx.raw;
+
+ if (flm_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup Actions */
uint16_t flm_rpl_ext_ptr = 0;
uint32_t flm_ft = 0;
@@ -2710,7 +2738,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
/* Program flow */
- convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ convert_fh_to_fh_flm(fh, packet_data, flm_idx.id1 + 2, flm_ft, flm_rpl_ext_ptr,
flm_scrub, attr->priority & 0x3);
flm_flow_programming(fh, NT_FLM_OP_LEARN);
@@ -3282,6 +3310,12 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, 0, 0);
+ hw_mod_flm_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
+ flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
--
2.45.0
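The reference-counting pattern of hw_db_inline_flm_add()/hw_db_inline_flm_deref() above (one slot per group, reuse on identical data, reset to default on last deref) can be sketched in isolation. Names and sizes here are illustrative stand-ins, not the driver's:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical miniature of the FLM RCP slot table: one slot per group,
 * reference-counted; group 0 is the reserved default and never counted. */
#define NB_GROUPS 32

struct rcp_slot {
	int ref;
	unsigned mask[10];
};

static struct rcp_slot slots[NB_GROUPS];

/* Returns the group on success, -1 when the slot holds different data. */
static int rcp_add(int group, const unsigned mask[10])
{
	if (group == 0)
		return 0;	/* group 0: default recipe, no refcount */

	if (slots[group].ref > 0) {
		if (memcmp(slots[group].mask, mask, sizeof(slots[group].mask)) != 0)
			return -1;	/* occupied with a different recipe */

		slots[group].ref += 1;	/* identical recipe: just take a reference */
		return group;
	}

	slots[group].ref = 1;
	memcpy(slots[group].mask, mask, sizeof(slots[group].mask));
	return group;
}

static void rcp_deref(int group)
{
	if (group == 0)
		return;

	if (--slots[group].ref <= 0)
		memset(&slots[group], 0, sizeof(slots[group]));	/* back to default */
}
```

The driver's version additionally programs the hardware recipe registers on first add and restores the default recipe on last deref; the bookkeeping is the same.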
* [PATCH v1 35/73] net/ntnic: add learn flow queue handling
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (33 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 34/73] net/ntnic: add flm rcp module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 36/73] net/ntnic: match and action db attributes were added Serhii Iliushyk
` (41 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement a thread for handling the flow learn queue.
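The learn queue's read/release contract (the consumer borrows a contiguous run of records and releases only the ones the hardware actually handled) can be sketched with a toy single-producer/single-consumer ring. The driver itself uses DPDK's rte_ring zero-copy API; the names and sizes below are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for the rte_ring based learn queue: a fixed-size array of
 * records with monotonically increasing head/tail indices. */
#define Q_SIZE 8	/* power of two, like the driver's QUEUE_SIZE = 1 << 13 */

struct lrn_queue {
	unsigned rec[Q_SIZE];
	size_t head;	/* next slot to write */
	size_t tail;	/* next slot to read */
};

struct rd_record {
	unsigned *p;
	size_t num;
};

static int lrn_enqueue(struct lrn_queue *q, unsigned rec)
{
	if (q->head - q->tail == Q_SIZE)
		return -1;	/* full */

	q->rec[q->head % Q_SIZE] = rec;
	q->head += 1;
	return 0;
}

/* Consumer: borrow a contiguous run of records without copying. */
static struct rd_record lrn_get_read_buffer(struct lrn_queue *q)
{
	struct rd_record rr = { NULL, 0 };
	size_t avail = q->head - q->tail;
	size_t first = q->tail % Q_SIZE;
	size_t contig = Q_SIZE - first;	/* stop at the wrap-around point */

	if (avail) {
		rr.p = &q->rec[first];
		rr.num = avail < contig ? avail : contig;
	}

	return rr;
}

/* Release only the records that were actually handled. */
static void lrn_release_read_buffer(struct lrn_queue *q, size_t num)
{
	q->tail += num;
}
```

This mirrors why flm_lrn_update() releases `handled_records` rather than `r.num`: records the hardware could not absorb stay queued and are retried on the next poll.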
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 5 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 33 +++++++
.../flow_api/profile_inline/flm_lrn_queue.c | 42 +++++++++
.../flow_api/profile_inline/flm_lrn_queue.h | 11 +++
.../profile_inline/flow_api_profile_inline.c | 48 ++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 94 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 241 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 13722c30a9..17d5755634 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,11 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt);
+
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
struct hsh_func_s {
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8017aa4fc3..8ebdd98db0 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -14,6 +14,7 @@ typedef struct ntdrv_4ga_s {
char *p_drv_name;
volatile bool b_shutdown;
+ rte_thread_t flm_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 0a7e90c04f..f4c29b8bde 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,3 +712,36 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ int ret = 0;
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_LRN_DATA:
+ ret = be->iface->flm_lrn_data_flush(be->be_dev, &be->flm, value, records,
+ handled_records,
+ (sizeof(struct flm_v25_lrn_data_s) /
+ sizeof(uint32_t)),
+ inf_word_cnt, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
index ad7efafe08..6e77c28f93 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -13,8 +13,28 @@
#include "flm_lrn_queue.h"
+#define QUEUE_SIZE (1 << 13)
+
#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+void *flm_lrn_queue_create(void)
+{
+ static_assert((ELEM_SIZE & ~(size_t)3) == ELEM_SIZE, "FLM LEARN struct size");
+ struct rte_ring *q = rte_ring_create_elem("RFQ",
+ ELEM_SIZE,
+ QUEUE_SIZE,
+ SOCKET_ID_ANY,
+ RING_F_MP_HTS_ENQ | RING_F_SC_DEQ);
+ assert(q != NULL);
+ return q;
+}
+
+void flm_lrn_queue_free(void *q)
+{
+ if (q)
+ rte_ring_free(q);
+}
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q)
{
struct rte_ring_zc_data zcd;
@@ -26,3 +46,25 @@ void flm_lrn_queue_release_write_buffer(void *q)
{
rte_ring_enqueue_zc_elem_finish(q, 1);
}
+
+read_record flm_lrn_queue_get_read_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ read_record rr;
+
+ if (rte_ring_dequeue_zc_burst_elem_start(q, ELEM_SIZE, QUEUE_SIZE, &zcd, NULL) != 0) {
+ rr.num = zcd.n1;
+ rr.p = zcd.ptr1;
+
+ } else {
+ rr.num = 0;
+ rr.p = NULL;
+ }
+
+ return rr;
+}
+
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num)
+{
+ rte_ring_dequeue_zc_elem_finish(q, num);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
index 8cee0c8e78..40558f4201 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -8,7 +8,18 @@
#include <stdint.h>
+typedef struct read_record {
+ uint32_t *p;
+ uint32_t num;
+} read_record;
+
+void *flm_lrn_queue_create(void);
+void flm_lrn_queue_free(void *q);
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q);
void flm_lrn_queue_release_write_buffer(void *q);
+read_record flm_lrn_queue_get_read_buffer(void *q);
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num);
+
#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 18d8573bb7..cbd26629cd 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -38,6 +38,48 @@
static void *flm_lrn_queue_arr;
+static void flm_setup_queues(void)
+{
+ flm_lrn_queue_arr = flm_lrn_queue_create();
+ assert(flm_lrn_queue_arr != NULL);
+}
+
+static void flm_free_queues(void)
+{
+ flm_lrn_queue_free(flm_lrn_queue_arr);
+}
+
+static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ read_record r = flm_lrn_queue_get_read_buffer(flm_lrn_queue_arr);
+
+ if (r.num) {
+ uint32_t handled_records = 0;
+
+ if (hw_mod_flm_lrn_data_set_flush(&dev->ndev->be, HW_FLM_FLOW_LRN_DATA, r.p, r.num,
+ &handled_records, inf_word_cnt, sta_word_cnt)) {
+ NT_LOG(ERR, FILTER, "Flow programming failed");
+
+ } else if (handled_records > 0) {
+ flm_lrn_queue_release_read_buffer(flm_lrn_queue_arr, handled_records);
+ }
+ }
+
+ return r.num;
+}
+
+static uint32_t flm_update(struct flow_eth_dev *dev)
+{
+ static uint32_t inf_word_cnt;
+ static uint32_t sta_word_cnt;
+
+ if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
+ return 1;
+
+ return inf_word_cnt + sta_word_cnt;
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -4223,6 +4265,12 @@ static const struct profile_inline_ops ops = {
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * NT Flow FLM Meter API
+ */
+ .flm_setup_queues = flm_setup_queues,
+ .flm_free_queues = flm_free_queues,
+ .flm_update = flm_update,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index a509a8eb51..bfca8f28b1 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -24,6 +24,11 @@
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
+#define THREAD_JOIN(a) rte_thread_join(a, NULL)
+#define THREAD_FUNC static uint32_t
+#define THREAD_RETURN (0)
#define HW_MAX_PKT_LEN (10000)
#define MAX_MTU (HW_MAX_PKT_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)
@@ -120,6 +125,16 @@ store_pdrv(struct drv_s *p_drv)
rte_spinlock_unlock(&hwlock);
}
+static void clear_pdrv(struct drv_s *p_drv)
+{
+ if (p_drv->adapter_no > NUM_ADAPTER_MAX)
+ return;
+
+ rte_spinlock_lock(&hwlock);
+ _g_p_drv[p_drv->adapter_no] = NULL;
+ rte_spinlock_unlock(&hwlock);
+}
+
static struct drv_s *
get_pdrv_from_pci(struct rte_pci_addr addr)
{
@@ -1240,6 +1255,13 @@ eth_dev_set_link_down(struct rte_eth_dev *eth_dev)
static void
drv_deinit(struct drv_s *p_drv)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ return;
+ }
+
const struct adapter_ops *adapter_ops = get_adapter_ops();
if (adapter_ops == NULL) {
@@ -1251,6 +1273,22 @@ drv_deinit(struct drv_s *p_drv)
return;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ fpga_info_t *fpga_info = &p_nt_drv->adapter_info.fpga_info;
+
+ /*
+ * Mark the global pdrv as cleared; some threads use this to terminate.
+ * Wait 1 second to give the threads a chance to see the termination.
+ */
+ clear_pdrv(p_drv);
+ nt_os_wait_usec(1000000);
+
+ /* stop statistics threads */
+ p_drv->ntdrv.b_shutdown = true;
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ THREAD_JOIN(p_nt_drv->flm_thread);
+ profile_inline_ops->flm_free_queues();
+ }
/* stop adapter */
adapter_ops->deinit(&p_nt_drv->adapter_info);
@@ -1359,6 +1397,43 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.promiscuous_enable = promiscuous_enable,
};
+/*
+ * Adapter flm stat thread
+ */
+THREAD_FUNC adapter_flm_update_thread_fn(void *context)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: profile_inline module uninitialized", __func__);
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct nt4ga_filter_s *p_nt4ga_filter = &p_adapter_info->nt4ga_filter;
+ struct flow_nic_dev *p_flow_nic_dev = p_nt4ga_filter->mp_flow_device;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: waiting for port configuration",
+ p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (p_flow_nic_dev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ struct flow_eth_dev *dev = p_flow_nic_dev->eth_base;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: begin", p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (!p_drv->ntdrv.b_shutdown)
+ if (profile_inline_ops->flm_update(dev) == 0)
+ nt_os_wait_usec(10);
+
+ NT_LOG(DBG, NTNIC, "%s: %s: end", p_adapter_info->mp_adapter_id_str, __func__);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1369,6 +1444,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* Return statement is not necessary here to allow traffic processing by SW */
}
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1597,6 +1679,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (profile_inline_ops != NULL && fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ profile_inline_ops->flm_setup_queues();
+ res = THREAD_CTRL_CREATE(&p_nt_drv->flm_thread, "ntnic-nt_flm_update_thr",
+ adapter_flm_update_thread_fn, (void *)p_drv);
+
+ if (res) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1069be2f85..27d6cbef01 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -256,6 +256,13 @@ struct profile_inline_ops {
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+
+ /*
+ * NT Flow FLM queue API
+ */
+ void (*flm_setup_queues)(void);
+ void (*flm_free_queues)(void);
+ uint32_t (*flm_update)(struct flow_eth_dev *dev);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v1 36/73] net/ntnic: match and action db attributes were added
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (34 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 35/73] net/ntnic: add learn flow queue handling Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 37/73] net/ntnic: add flow dump feature Serhii Iliushyk
` (40 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement match and action set database attributes and their dereferencing.
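The enable-bitmap arithmetic used by hw_db_copy_ft() in this patch (copy one flow-type enable bit from a source CFN column to a destination column, leaving all other bits untouched) can be isolated as a small helper. This is a hedged sketch with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the bitmask step in hw_db_copy_ft(): propagate the state of one
 * source bit into one destination bit of a per-category enable bitmap. */
static uint32_t copy_enable_bit(uint32_t bm_dst, uint32_t bm_src,
				int field_dst, int field_src)
{
	uint32_t bit_dst = 1u << field_dst;
	uint32_t bit_src = 1u << field_src;
	uint32_t enabled = bm_src & bit_src;

	/* Set the destination bit when the source bit is set, clear it
	 * otherwise; every other destination bit is preserved. */
	return enabled ? (bm_dst | bit_dst) : (bm_dst & ~bit_dst);
}
```

The driver then flushes the destination FTE register only when the computed bitmap differs from the current one, avoiding redundant hardware writes.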
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../profile_inline/flow_api_hw_db_inline.c | 795 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 35 +
.../profile_inline/flow_api_profile_inline.c | 55 ++
3 files changed, 885 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 089f8c8174..06493d0938 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -8,6 +8,9 @@
#include "flow_api_hw_db_inline.h"
+#define HW_DB_INLINE_ACTION_SET_NB 512
+#define HW_DB_INLINE_MATCH_SET_NB 512
+
#define HW_DB_FT_LOOKUP_KEY_A 0
#define HW_DB_FT_TYPE_KM 1
@@ -109,6 +112,20 @@ struct hw_db_inline_resource_db {
int cfn_hw;
int ref;
} *cfn;
+
+ uint32_t cfn_priority_counter;
+ uint32_t set_priority_counter;
+
+ struct hw_db_inline_resource_db_action_set {
+ struct hw_db_inline_action_set_data data;
+ int ref;
+ } action_set[HW_DB_INLINE_ACTION_SET_NB];
+
+ struct hw_db_inline_resource_db_match_set {
+ struct hw_db_inline_match_set_data data;
+ int ref;
+ uint32_t set_priority;
+ } match_set[HW_DB_INLINE_MATCH_SET_NB];
};
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
@@ -291,6 +308,16 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ hw_db_inline_match_set_deref(ndev, db_handle,
+ *(struct hw_db_match_set_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ hw_db_inline_action_set_deref(ndev, db_handle,
+ *(struct hw_db_action_set_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_CAT:
hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
break;
@@ -359,6 +386,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_NONE:
return NULL;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ return &db->match_set[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ return &db->action_set[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_CAT:
return &db->cat[idxs[i].ids].data;
@@ -551,6 +584,763 @@ static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int fl
}
+static void hw_db_copy_ft(struct flow_nic_dev *ndev, int type, int cfn_dst, int cfn_src,
+ int lookup, int flow_type)
+{
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index_dst = (8 * flow_type + cfn_dst / cat_funcs) * max_lookups + lookup;
+ int fte_field_dst = cfn_dst % cat_funcs;
+
+ int fte_index_src = (8 * flow_type + cfn_src / cat_funcs) * max_lookups + lookup;
+ int fte_field_src = cfn_src % cat_funcs;
+
+ uint32_t current_bm_dst = 0;
+ uint32_t current_bm_src = 0;
+ uint32_t fte_field_bm_dst = 1 << fte_field_dst;
+ uint32_t fte_field_bm_src = 1 << fte_field_src;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_src, &current_bm_src);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_src, &current_bm_src);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t enable = current_bm_src & fte_field_bm_src;
+ uint32_t final_bm_dst = enable ? (fte_field_bm_dst | current_bm_dst)
+ : (~fte_field_bm_dst & current_bm_dst);
+
+ if (current_bm_dst != final_bm_dst) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+
+static int hw_db_inline_filter_apply(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id,
+ struct hw_db_match_set_idx match_set_idx,
+ struct hw_db_flm_ft flm_ft_idx,
+ struct hw_db_action_set_idx action_set_idx)
+{
+ (void)match_set_idx;
+ (void)flm_ft_idx;
+
+ const struct hw_db_inline_match_set_data *match_set =
+ &db->match_set[match_set_idx.ids].data;
+ const struct hw_db_inline_cat_data *cat = &db->cat[match_set->cat.ids].data;
+
+ const int km_ft = match_set->km_ft.id1;
+ const int km_rcp = (int)db->km[match_set->km.id1].data.rcp;
+
+ const int flm_ft = flm_ft_idx.id1;
+ const int flm_rcp = flm_ft_idx.id2;
+
+ const struct hw_db_inline_action_set_data *action_set =
+ &db->action_set[action_set_idx.ids].data;
+ const struct hw_db_inline_cot_data *cot = &db->cot[action_set->cot.ids].data;
+
+ const int qsl_hw_id = action_set->qsl.ids;
+ const int slc_lr_hw_id = action_set->slc_lr.ids;
+ const int tpe_hw_id = action_set->tpe.ids;
+ const int hsh_hw_id = action_set->hsh.ids;
+
+ /* Setup default FLM RCP if needed */
+ if (flm_rcp > 0 && db->flm[flm_rcp].ref <= 0)
+ hw_db_inline_setup_default_flm_rcp(ndev, flm_rcp);
+
+ /* Setup CAT.CFN */
+ {
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x0);
+
+ /* Protocol checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_ISL, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_CFP, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MAC, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L2, cat_hw_id, 0, cat->ptc_mask_l2);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VNTAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VLAN, cat_hw_id, 0, cat->vlan_mask);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, cat->ptc_mask_l3);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_FRAG, cat_hw_id, 0,
+ cat->ptc_mask_frag);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_IP_PROT, cat_hw_id, 0, cat->ip_prot);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L4, cat_hw_id, 0, cat->ptc_mask_l4);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TUNNEL, cat_hw_id, 0,
+ cat->ptc_mask_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L2, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_VLAN, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L3, cat_hw_id, 0,
+ cat->ptc_mask_l3_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_FRAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_IP_PROT, cat_hw_id, 0,
+ cat->ip_prot_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L4, cat_hw_id, 0,
+ cat->ptc_mask_l4_tunnel);
+
+ /* Error checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_CV, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_FCS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TRUNC, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L3_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L4_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L3_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L4_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl_tunnel);
+
+ /* MAC port check */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_MAC_PORT, cat_hw_id, 0,
+ cat->mac_port_mask);
+
+ /* Pattern match checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMP, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_DCT, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_EXT_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMB, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_AND_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_OR_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_INV, cat_hw_id, 0, -1);
+
+ /* Length checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC_INV, cat_hw_id, 0, -1);
+
+ /* KM and FLM */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3);
+
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 0, cat_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 0, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 1, hsh_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 2, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 2,
+ slc_lr_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 5, tpe_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 5, 0);
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id,
+ 0x001 | 0x004 | (qsl_hw_id ? 0x008 : 0) |
+ (slc_lr_hw_id ? 0x020 : 0) | 0x040 |
+ (tpe_hw_id ? 0x400 : 0));
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ km_rcp);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ flm_rcp);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, flm_ft, 1);
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COLOR, cat_hw_id, cot->frag_rcp << 10);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_KM, cat_hw_id,
+ cot->matcher_color_contrib);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ return 0;
+}
+
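The CAT.KM and CAT.FLM blocks above enable a category by read-modify-writing a single bit in an enable bitmap that packs eight categories per word. A minimal standalone sketch of that bit manipulation (the helper name is illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the KCE enable bitmap used above: one enable
 * bit per category, eight categories packed per bitmap word, so the
 * word index is cat_hw_id / 8 and the bit index is cat_hw_id % 8. */
static uint32_t kce_update_bit(uint32_t bm, int cat_hw_id, int enable)
{
	if (enable)
		return bm | (1u << (cat_hw_id % 8));	/* filter apply path */
	return bm & ~(1u << (cat_hw_id % 8));		/* filter clear path */
}
```

In the driver this pattern appears as a get/set/flush triple so that the other seven categories sharing the same bitmap word are preserved across the update.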
+static void hw_db_inline_filter_clear(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id)
+{
+ /* Setup CAT.CFN */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < 6; ++i) {
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + i, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + i, 0);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0);
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft,
+ 0);
+ }
+ }
+
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+}
+
+static void hw_db_inline_filter_copy(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db, int cfn_dst, int cfn_src)
+{
+ uint32_t val = 0;
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_COPY_FROM, cfn_dst, 0, cfn_src);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < offset; ++i) {
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_dst + i, val);
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_dst + i, val);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cfn_dst, offset);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_get(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_src, &val);
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_dst, val);
+ hw_mod_cat_cte_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_km_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_KM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_flm_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_C, ft);
+ }
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COPY_FROM, cfn_dst, cfn_src);
+ hw_mod_cat_cot_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+}
+
+/*
+ * Algorithm for moving CFN entries to make space with respect of priority.
+ * The algorithm will make the fewest possible moves to fit a new CFN entry.
+ */
+static int hw_db_inline_alloc_prioritized_cfn(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ struct hw_db_match_set_idx match_set_idx)
+{
+ const struct hw_db_inline_resource_db_match_set *match_set =
+ &db->match_set[match_set_idx.ids];
+
+ uint64_t priority = ((uint64_t)(match_set->data.priority & 0xff) << 56) |
+ ((uint64_t)(0xffffff - (match_set->set_priority & 0xffffff)) << 32) |
+ (0xffffffff - ++db->cfn_priority_counter);
+
+ int db_cfn_idx = -1;
+
+ struct {
+ uint64_t priority;
+ uint32_t idx;
+ } sorted_priority[db->nb_cat];
+
+ memset(sorted_priority, 0x0, sizeof(sorted_priority));
+
+ uint32_t in_use_count = 0;
+
+ for (uint32_t i = 1; i < db->nb_cat; ++i) {
+ if (db->cfn[i].ref > 0) {
+ sorted_priority[db->cfn[i].cfn_hw].priority = db->cfn[i].priority;
+ sorted_priority[db->cfn[i].cfn_hw].idx = i;
+ in_use_count += 1;
+
+ } else if (db_cfn_idx == -1) {
+ db_cfn_idx = (int)i;
+ }
+ }
+
+ if (in_use_count >= db->nb_cat - 1)
+ return -1;
+
+ if (in_use_count == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = 1;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ int goal = 1;
+ int free_before = -1000000;
+ int free_after = 1000000;
+ int found_smaller = 0;
+
+ for (int i = 1; i < (int)db->nb_cat; ++i) {
+ if (sorted_priority[i].priority > priority) { /* Bigger */
+ goal = i + 1;
+
+ } else if (sorted_priority[i].priority == 0) { /* Not set */
+ if (found_smaller) {
+ if (free_after > i)
+ free_after = i;
+
+ } else {
+ free_before = i;
+ }
+
+ } else {/* Smaller */
+ found_smaller = 1;
+ }
+ }
+
+ int diff_before = goal - free_before - 1;
+ int diff_after = free_after - goal;
+
+ if (goal < (int)db->nb_cat && sorted_priority[goal].priority == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ if (diff_after <= diff_before) {
+ for (int i = free_after; i > goal; --i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i - 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+
+ } else {
+ goal -= 1;
+
+ for (int i = free_before; i < goal; ++i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i + 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+ }
+
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+
+ return db_cfn_idx;
+}
+
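The placement decision inside hw_db_inline_alloc_prioritized_cfn() can be modeled in isolation: slots hold entries in descending priority order, a free slot is marked by priority 0, and the algorithm opens a slot at the goal position by shifting entries towards whichever free slot costs fewer copies. A minimal standalone model under those assumptions (NB_CAT and choose_cfn_slot are illustrative names, not driver API):

```c
#include <assert.h>
#include <stdint.h>

#define NB_CAT 8	/* hypothetical table size for illustration */

/* Returns the slot a new entry of priority new_prio should occupy in
 * slots 1..NB_CAT-1, and reports in *moves how many existing entries
 * must be copied aside to open that slot. */
static int choose_cfn_slot(const uint64_t prio[NB_CAT], uint64_t new_prio,
			   int *moves)
{
	int goal = 1;
	int free_before = -1000000;
	int free_after = 1000000;
	int found_smaller = 0;

	for (int i = 1; i < NB_CAT; ++i) {
		if (prio[i] > new_prio) {
			goal = i + 1;	/* must land after all higher priorities */
		} else if (prio[i] == 0) {	/* free slot */
			if (found_smaller) {
				if (free_after > i)
					free_after = i;
			} else {
				free_before = i;
			}
		} else {	/* smaller priority */
			found_smaller = 1;
		}
	}

	if (goal < NB_CAT && prio[goal] == 0) {
		*moves = 0;	/* a free slot already sits at the goal */
		return goal;
	}

	/* Pick the cheaper side: shift lower-priority entries up towards
	 * free_after, or shift higher-priority entries down towards
	 * free_before. */
	if (free_after - goal <= goal - free_before - 1) {
		*moves = free_after - goal;
		return goal;
	}
	*moves = goal - 1 - free_before;
	return goal - 1;
}
```

For example, with priorities {100, 90} occupying slots 1 and 2 and a new priority of 95, the model shifts the 90-entry up one slot and places the new entry at slot 2, one copy in total.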
+static void hw_db_inline_free_prioritized_cfn(struct hw_db_inline_resource_db *db, int cfn_hw)
+{
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (db->cfn[i].cfn_hw == cfn_hw) {
+ memset(&db->cfn[i], 0x0, sizeof(struct hw_db_inline_resource_db_cfn));
+ break;
+ }
+ }
+}
+
+static void hw_db_inline_update_active_filters(struct flow_nic_dev *ndev, void *db_handle,
+ int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[group];
+ struct hw_db_inline_resource_db_flm_cfn_map *cell;
+
+ for (uint32_t match_set_idx = 0; match_set_idx < db->nb_cat; ++match_set_idx) {
+ for (uint32_t ft_idx = 0; ft_idx < db->nb_flm_ft; ++ft_idx) {
+ int active = flm_rcp->ft[ft_idx].ref > 0 &&
+ flm_rcp->match_set[match_set_idx].ref > 0;
+ cell = &flm_rcp->cfn_map[match_set_idx * db->nb_flm_ft + ft_idx];
+
+ if (active && cell->cfn_idx == 0) {
+ /* Setup filter */
+ cell->cfn_idx = hw_db_inline_alloc_prioritized_cfn(ndev, db,
+ flm_rcp->match_set[match_set_idx].idx);
+ hw_db_inline_filter_apply(ndev, db, db->cfn[cell->cfn_idx].cfn_hw,
+ flm_rcp->match_set[match_set_idx].idx,
+ flm_rcp->ft[ft_idx].idx,
+ group == 0
+ ? db->match_set[flm_rcp->match_set[match_set_idx]
+ .idx.ids]
+ .data.action_set
+ : flm_rcp->ft[ft_idx].data.action_set);
+ }
+
+ if (!active && cell->cfn_idx > 0) {
+ /* Teardown filter */
+ hw_db_inline_filter_clear(ndev, db, db->cfn[cell->cfn_idx].cfn_hw);
+ hw_db_inline_free_prioritized_cfn(db,
+ db->cfn[cell->cfn_idx].cfn_hw);
+ cell->cfn_idx = 0;
+ }
+ }
+ }
+}
+
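hw_db_inline_update_active_filters() walks the (match set x flow type) matrix and keeps a hardware filter allocated exactly when both sides are still referenced. A toy model of that activation rule (illustrative helper, not driver API):

```c
#include <assert.h>

/* A cell must hold a filter exactly when both its match set and its
 * flow type are referenced.  Returns +1 when a filter should be set
 * up, -1 when it should be torn down, 0 when nothing changes. */
static int cell_transition(int ft_ref, int match_set_ref, int cfn_allocated)
{
	int active = ft_ref > 0 && match_set_ref > 0;

	if (active && !cfn_allocated)
		return +1;	/* allocate a CFN and apply the filter */
	if (!active && cfn_allocated)
		return -1;	/* clear the filter and free the CFN */
	return 0;
}
```

This is why the function is re-run after every match-set or flow-type add/deref: each reference change can flip individual cells in either direction.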
+
+/******************************************************************************/
+/* Match set */
+/******************************************************************************/
+
+static int hw_db_inline_match_set_compare(const struct hw_db_inline_match_set_data *data1,
+ const struct hw_db_inline_match_set_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->km_ft.raw == data2->km_ft.raw && data1->jump == data2->jump;
+}
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_match_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_MATCH_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_MATCH_SET_NB; ++i) {
+ if (!found && db->match_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->match_set[i].ref > 0 &&
+ hw_db_inline_match_set_compare(data, &db->match_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_match_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ found = 0;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].ref <= 0) {
+ found = 1;
+ flm_rcp->match_set[i].ref = 1;
+ flm_rcp->match_set[i].idx.raw = idx.raw;
+ break;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->match_set[idx.ids].data, data, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 1;
+ db->match_set[idx.ids].set_priority = ++db->set_priority_counter;
+
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
+ return idx;
+}
+
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->match_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+ int jump;
+
+ if (idx.error)
+ return;
+
+ db->match_set[idx.ids].ref -= 1;
+
+ if (db->match_set[idx.ids].ref > 0)
+ return;
+
+ jump = db->match_set[idx.ids].data.jump;
+ flm_rcp = &db->flm[jump];
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].idx.raw == idx.raw) {
+ flm_rcp->match_set[i].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, jump);
+ memset(&flm_rcp->match_set[i], 0x0,
+ sizeof(struct hw_db_inline_resource_db_flm_match_set));
+ }
+ }
+
+ memset(&db->match_set[idx.ids].data, 0x0, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 0;
+}
+
+/******************************************************************************/
+/* Action set */
+/******************************************************************************/
+
+static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_data *data1,
+ const struct hw_db_inline_action_set_data *data2)
+{
+ if (data1->contains_jump)
+ return data2->contains_jump && data1->jump == data2->jump;
+
+ return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
+ data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
+ data1->hsh.raw == data2->hsh.raw;
+}
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_action_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_ACTION_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_ACTION_SET_NB; ++i) {
+ if (!found && db->action_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->action_set[i].ref > 0 &&
+ hw_db_inline_action_set_compare(data, &db->action_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_action_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->action_set[idx.ids].data, data, sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->action_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->action_set[idx.ids].ref -= 1;
+
+ if (db->action_set[idx.ids].ref <= 0) {
+ memset(&db->action_set[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 0;
+ }
+}
+
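Both the match-set and action-set tables above follow the same ref-counted, deduplicating add pattern: reuse an identical live entry by bumping its refcount, otherwise claim the first free slot, and signal exhaustion when neither is possible. A condensed sketch of that pattern (names are illustrative, not driver API):

```c
#include <assert.h>
#include <string.h>

struct pool_entry {
	int ref;
	int data;	/* stands in for the *_set_data payload */
};

/* Returns the index of the shared or newly claimed entry,
 * or -1 when the table is exhausted. */
static int pool_add(struct pool_entry *pool, int n, int data)
{
	int free_idx = -1;

	for (int i = 0; i < n; ++i) {
		if (pool[i].ref > 0 && pool[i].data == data) {
			pool[i].ref += 1;	/* dedup: share the entry */
			return i;
		}
		if (free_idx < 0 && pool[i].ref <= 0)
			free_idx = i;
	}

	if (free_idx < 0)
		return -1;	/* resource exhaustion */

	pool[free_idx].data = data;
	pool[free_idx].ref = 1;
	return free_idx;
}
```

The matching deref simply decrements the refcount and memsets the entry back to zero when it reaches zero, exactly as hw_db_inline_action_set_deref() does above.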
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1592,6 +2382,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
return idx;
}
@@ -1646,6 +2438,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->group);
+
return idx;
}
@@ -1676,6 +2470,7 @@ void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struc
return;
flm_rcp->ft[idx.id1].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, idx.id2);
memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 9820225ffa..33de674b72 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -131,6 +131,10 @@ struct hw_db_hsh_idx {
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
+
+ HW_DB_IDX_TYPE_MATCH_SET,
+ HW_DB_IDX_TYPE_ACTION_SET,
+
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
@@ -145,6 +149,17 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_HSH,
};
+/* Container types */
+struct hw_db_inline_match_set_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_km_ft km_ft;
+ struct hw_db_action_set_idx action_set;
+ int jump;
+
+ uint8_t priority;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -224,6 +239,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
@@ -262,6 +278,25 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data);
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data);
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+
+/**/
+
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_cot_data *data);
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index cbd26629cd..149b354bcb 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2681,10 +2681,30 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup Action Set */
+ struct hw_db_inline_action_set_data action_set_data = {
+ .contains_jump = 0,
+ .cot = cot_idx,
+ .qsl = qsl_idx,
+ .slc_lr = slc_lr_idx,
+ .tpe = tpe_idx,
+ .hsh = hsh_idx,
+ };
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
+ local_idxs[(*local_idx_counter)++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 0,
.group = group,
+ .action_set = action_set_idx,
};
struct hw_db_flm_ft flm_ft_idx = empty_pattern
? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
@@ -2872,6 +2892,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
}
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &action_set_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup CAT */
struct hw_db_inline_cat_data cat_data = {
.vlan_mask = (0xf << fd->vlans) & 0xf,
@@ -2991,6 +3023,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
struct hw_db_inline_km_ft_data km_ft_data = {
.cat = cat_idx,
.km = km_idx,
+ .action_set = action_set_idx,
};
struct hw_db_km_ft km_ft_idx =
hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
@@ -3027,10 +3060,32 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup Match Set */
+ struct hw_db_inline_match_set_data match_set_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ .km_ft = km_ft_idx,
+ .action_set = action_set_idx,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .priority = attr->priority & 0xff,
+ };
+ struct hw_db_match_set_idx match_set_idx =
+ hw_db_inline_match_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &match_set_data);
+ fh->db_idxs[fh->db_idx_counter++] = match_set_idx.raw;
+
+ if (match_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Match Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 1,
.jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .action_set = action_set_idx,
+
};
struct hw_db_flm_ft flm_ft_idx =
hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
--
2.45.0
* [PATCH v1 37/73] net/ntnic: add flow dump feature
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (35 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 36/73] net/ntnic: match and action db attributes were added Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 23:10 ` Stephen Hemminger
2024-10-21 21:04 ` [PATCH v1 38/73] net/ntnic: add flow flush Serhii Iliushyk
` (39 subsequent siblings)
76 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Add the possibility to dump flows in a human-readable format.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 17 ++
.../profile_inline/flow_api_hw_db_inline.c | 264 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 3 +
.../profile_inline/flow_api_profile_inline.c | 81 ++++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 29 ++
drivers/net/ntnic/ntnic_mod_reg.h | 11 +
8 files changed, 413 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index e52363f04e..155a9e1fd6 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -281,6 +281,8 @@ struct flow_handle {
struct flow_handle *next;
struct flow_handle *prev;
+ /* Flow specific pointer to application data stored during action creation. */
+ void *context;
void *user_data;
union {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index c6b818a36b..6266f722a1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1016,6 +1016,22 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
return 0;
}
+static int flow_dev_dump(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_dev_dump_profile_inline(dev, flow, caller_id, file, error);
+}
+
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf)
{
@@ -1041,6 +1057,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_dev_dump = flow_dev_dump,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 06493d0938..c48076e0d8 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -371,6 +371,270 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ char str_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(str_buffer);
+
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_MATCH_SET: {
+ const struct hw_db_inline_match_set_data *data =
+ &db->match_set[idxs[i].ids].data;
+ fprintf(file, " MATCH_SET %d, priority %d\n", idxs[i].ids,
+ (int)data->priority);
+ fprintf(file, " CAT id %d, KM id %d, KM_FT id %d, ACTION_SET id %d\n",
+ data->cat.ids, data->km.id1, data->km_ft.id1,
+ data->action_set.ids);
+
+ if (data->jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_ACTION_SET: {
+ const struct hw_db_inline_action_set_data *data =
+ &db->action_set[idxs[i].ids].data;
+ fprintf(file, " ACTION_SET %d\n", idxs[i].ids);
+
+ if (data->contains_jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ else
+ fprintf(file,
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ data->cot.ids, data->qsl.ids, data->slc_lr.ids,
+ data->tpe.ids, data->hsh.ids);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_CAT: {
+ const struct hw_db_inline_cat_data *data = &db->cat[idxs[i].ids].data;
+ fprintf(file, " CAT %d\n", idxs[i].ids);
+ fprintf(file, " Port msk 0x%02x, VLAN msk 0x%02x\n",
+ (int)data->mac_port_mask, (int)data->vlan_mask);
+ fprintf(file,
+ " Proto msks: Frag 0x%02x, l2 0x%02x, l3 0x%02x, l4 0x%02x, l3t 0x%02x, l4t 0x%02x\n",
+ (int)data->ptc_mask_frag, (int)data->ptc_mask_l2,
+ (int)data->ptc_mask_l3, (int)data->ptc_mask_l4,
+ (int)data->ptc_mask_l3_tunnel, (int)data->ptc_mask_l4_tunnel);
+ fprintf(file, " IP protocol: pn %u pnt %u\n", data->ip_prot,
+ data->ip_prot_tunnel);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_QSL: {
+ const struct hw_db_inline_qsl_data *data = &db->qsl[idxs[i].ids].data;
+ fprintf(file, " QSL %d\n", idxs[i].ids);
+
+ if (data->discard) {
+ fprintf(file, " Discard\n");
+ break;
+ }
+
+ if (data->drop) {
+ fprintf(file, " Drop\n");
+ break;
+ }
+
+ fprintf(file, " Table size %d\n", data->table_size);
+
+ for (uint32_t i = 0;
+ i < data->table_size && i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ fprintf(file, " %u: Queue %d, TX port %d\n", i,
+ (data->table[i].queue_en ? (int)data->table[i].queue : -1),
+ (data->table[i].tx_port_en ? (int)data->table[i].tx_port
+ : -1));
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_COT: {
+ const struct hw_db_inline_cot_data *data = &db->cot[idxs[i].ids].data;
+ fprintf(file, " COT %d\n", idxs[i].ids);
+ fprintf(file, " Color contrib %d, frag rcp %d\n",
+ (int)data->matcher_color_contrib, (int)data->frag_rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_SLC_LR: {
+ const struct hw_db_inline_slc_lr_data *data =
+ &db->slc_lr[idxs[i].ids].data;
+ fprintf(file, " SLC_LR %d\n", idxs[i].ids);
+ fprintf(file, " Enable %u, dyn %u, ofs %u\n", data->head_slice_en,
+ data->head_slice_dyn, data->head_slice_ofs);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE: {
+ const struct hw_db_inline_tpe_data *data = &db->tpe[idxs[i].ids].data;
+ fprintf(file, " TPE %d\n", idxs[i].ids);
+ fprintf(file, " Insert len %u, new outer %u, calc eth %u\n",
+ data->insert_len, data->new_outer,
+ data->calc_eth_type_from_inner_ip);
+ fprintf(file, " TTL enable %u, dyn %u, ofs %u\n", data->ttl_en,
+ data->ttl_dyn, data->ttl_ofs);
+ fprintf(file,
+ " Len A enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_a_en, data->len_a_pos_dyn, data->len_a_pos_ofs,
+ data->len_a_add_dyn, data->len_a_add_ofs, data->len_a_sub_dyn);
+ fprintf(file,
+ " Len B enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_b_en, data->len_b_pos_dyn, data->len_b_pos_ofs,
+ data->len_b_add_dyn, data->len_b_add_ofs, data->len_b_sub_dyn);
+ fprintf(file,
+ " Len C enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_c_en, data->len_c_pos_dyn, data->len_c_pos_ofs,
+ data->len_c_add_dyn, data->len_c_add_ofs, data->len_c_sub_dyn);
+
+ for (uint32_t i = 0; i < 6; ++i)
+ if (data->writer[i].en)
+ fprintf(file,
+ " Writer %i: Reader %u, dyn %u, ofs %u, len %u\n",
+ i, data->writer[i].reader_select,
+ data->writer[i].dyn, data->writer[i].ofs,
+ data->writer[i].len);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE_EXT: {
+ const struct hw_db_inline_tpe_ext_data *data =
+ &db->tpe_ext[idxs[i].ids].data;
+ const int rpl_rpl_length = ((int)data->size + 15) / 16;
+ fprintf(file, " TPE_EXT %d\n", idxs[i].ids);
+ fprintf(file, " Encap data, size %u\n", data->size);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ fprintf(file, " ");
+
+ for (int n = 15; n >= 0; --n)
+ fprintf(file, " %02x%s", data->hdr8[i * 16 + n],
+ n == 8 ? " " : "");
+
+ fprintf(file, "\n");
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_RCP: {
+ const struct hw_db_inline_flm_rcp_data *data = &db->flm[idxs[i].id1].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " QW0 dyn %u, ofs %u, QW4 dyn %u, ofs %u\n",
+ data->qw0_dyn, data->qw0_ofs, data->qw4_dyn, data->qw4_ofs);
+ fprintf(file, " SW8 dyn %u, ofs %u, SW9 dyn %u, ofs %u\n",
+ data->sw8_dyn, data->sw8_ofs, data->sw9_dyn, data->sw9_ofs);
+ fprintf(file, " Outer prot %u, inner prot %u\n", data->outer_prot,
+ data->inner_prot);
+ fprintf(file, " Mask:\n");
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[0],
+ data->mask[1], data->mask[2], data->mask[3], data->mask[4]);
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[5],
+ data->mask[6], data->mask[7], data->mask[8], data->mask[9]);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_FT: {
+ const struct hw_db_inline_flm_ft_data *data =
+ &db->flm[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " FLM_FT %d\n", idxs[i].id1);
+
+ if (data->is_group_zero)
+ fprintf(file, " Jump to %d\n", data->jump);
+
+ else
+ fprintf(file, " Group %d\n", data->group);
+
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_RCP: {
+ const struct hw_db_inline_km_rcp_data *data = &db->km[idxs[i].id1].data;
+ fprintf(file, " KM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " HW id %u\n", data->rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_FT: {
+ const struct hw_db_inline_km_ft_data *data =
+ &db->km[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " KM_FT %d\n", idxs[i].id1);
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ fprintf(file, " KM_RCP id %d\n", data->km.ids);
+ fprintf(file, " CAT id %d\n", data->cat.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_HSH: {
+ const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
+ fprintf(file, " HSH %d\n", idxs[i].ids);
+
+ switch (data->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ fprintf(file, " Func: NTH10\n");
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ fprintf(file, " Func: Toeplitz\n");
+ fprintf(file, " Key:");
+
+ for (uint8_t i = 0; i < MAX_RSS_KEY_LEN; i++) {
+ if (i % 10 == 0)
+ fprintf(file, "\n ");
+
+ fprintf(file, " %02x", data->key[i]);
+ }
+
+ fprintf(file, "\n");
+ break;
+
+ default:
+ fprintf(file, " Func: %u\n", data->func);
+ }
+
+ fprintf(file, " Hash mask hex:\n");
+ fprintf(file, " %016lx\n", data->hash_mask);
+
+ /* convert hash mask to human readable RTE_ETH_RSS_* form if possible */
+ if (sprint_nt_rss_mask(str_buffer, rss_buffer_len, "\n ",
+ data->hash_mask) == 0) {
+ fprintf(file, " Hash mask flags:%s\n", str_buffer);
+ }
+
+ break;
+ }
+
+ default: {
+ fprintf(file, " Unknown item. Type %u\n", idxs[i].type);
+ break;
+ }
+ }
+ }
+}
+
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ fprintf(file, "CFN status:\n");
+
+ for (uint32_t id = 0; id < db->nb_cat; ++id)
+ if (db->cfn[id].cfn_hw)
+ fprintf(file, " ID %d, HW id %d, priority 0x%" PRIx64 "\n", (int)id,
+ db->cfn[id].cfn_hw, db->cfn[id].priority);
+}
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 33de674b72..a9d31c86ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -276,6 +276,9 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
uint32_t size);
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file);
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file);
/**/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 149b354bcb..39d0677402 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4305,6 +4305,86 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
return res;
}
+static void dump_flm_data(const uint32_t *data, FILE *file)
+{
+ for (unsigned int i = 0; i < 10; ++i) {
+ fprintf(file, "%s%02X %02X %02X %02X%s", i % 2 ? "" : " ",
+ (data[i] >> 24) & 0xff, (data[i] >> 16) & 0xff, (data[i] >> 8) & 0xff,
+ data[i] & 0xff, i % 2 ? "\n" : " ");
+ }
+}
+
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ if (flow != NULL) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLM) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+
+ } else {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs, flow->db_idx_counter,
+ file);
+ }
+
+ } else {
+ int max_flm_count = 1000;
+
+ hw_db_inline_dump_cfn(dev->ndev, dev->ndev->hw_db_handle, file);
+
+ flow = dev->ndev->flow_base;
+
+ while (flow) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs,
+ flow->db_idx_counter, file);
+ }
+
+ flow = flow->next;
+ }
+
+ flow = dev->ndev->flow_base_flm;
+
+ while (flow && max_flm_count >= 0) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+ max_flm_count -= 1;
+ }
+
+ flow = flow->next;
+ }
+ }
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
static const struct profile_inline_ops ops = {
/*
@@ -4313,6 +4393,7 @@ static const struct profile_inline_ops ops = {
.done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
.initialize_flow_management_of_ndev_profile_inline =
initialize_flow_management_of_ndev_profile_inline,
+ .flow_dev_dump_profile_inline = flow_dev_dump_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e623bb2352..2c76a2c023 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index df391b6399..5505198148 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -569,9 +569,38 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: flow_filter module uninitialized", __func__);
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_dev_dump(internals->flw_dev,
+ is_flow_handle_typecast(flow) ? (void *)flow
+ : flow->flw_hdl,
+ caller_id, file, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .dev_dump = eth_flow_dev_dump,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 27d6cbef01..cef655c5e0 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,12 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -284,6 +290,11 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ int (*flow_dev_dump)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
/*
* NT Flow API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
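The dump_flm_data() helper added in the patch above prints ten 32-bit FLM words, most-significant byte first, two words per line. The byte ordering can be sketched in isolation (fmt_word is a hypothetical stand-in, not a driver function):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Format one 32-bit word most-significant byte first, the same
 * ordering dump_flm_data() uses when printing FLM data words. */
static void fmt_word(char out[12], uint32_t w)
{
	snprintf(out, 12, "%02X %02X %02X %02X",
		 (w >> 24) & 0xff, (w >> 16) & 0xff,
		 (w >> 8) & 0xff, w & 0xff);
}
```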
* [PATCH v1 38/73] net/ntnic: add flow flush
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (36 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 37/73] net/ntnic: add flow dump feature Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 39/73] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
` (38 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Implement the flow flush API.
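The flush path below destroys flows while walking a singly linked list, which is only safe because the successor pointer is saved before the node is destroyed. That traversal idiom can be sketched on its own (struct node, push and count are hypothetical helpers, not the driver's struct flow_handle):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal list node standing in for the driver's flow handle. */
struct node {
	int caller_id;
	struct node *next;
};

static struct node *push(struct node *head, int caller_id)
{
	struct node *n = malloc(sizeof(*n));
	n->caller_id = caller_id;
	n->next = head;
	return n;
}

static int count(const struct node *head)
{
	int c = 0;
	for (; head; head = head->next)
		++c;
	return c;
}

/* Remove every node owned by caller_id. The successor is captured
 * before the node is freed, so iteration survives the destruction,
 * mirroring the flow_next bookkeeping in the flush loop below. */
static struct node *flush_caller(struct node *head, int caller_id)
{
	struct node *flow = head;
	struct node *prev = NULL;

	while (flow) {
		struct node *flow_next = flow->next;	/* save before destroy */

		if (flow->caller_id == caller_id) {
			if (prev)
				prev->next = flow_next;
			else
				head = flow_next;

			free(flow);

		} else {
			prev = flow;
		}

		flow = flow_next;
	}

	return head;
}
```

In the driver the unlinking is done by flow_destroy_profile_inline() itself; the sketch unlinks explicitly so it is self-contained.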
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 13 ++++++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 4 ++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 38 ++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +++
5 files changed, 105 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 6266f722a1..a2cb9a68b4 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -263,6 +263,18 @@ static int flow_destroy(struct flow_eth_dev *dev, struct flow_handle *flow,
return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
+static int flow_flush(struct flow_eth_dev *dev, uint16_t caller_id, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_flush_profile_inline(dev, caller_id, error);
+}
+
/*
* Device Management API
*/
@@ -1057,6 +1069,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 39d0677402..0cb9451390 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3640,6 +3640,48 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ /*
+ * Delete all created FLM flows from this eth device.
+ * FLM flows must be deleted first because normal flows are their parents.
+ */
+ struct flow_handle *flow = dev->ndev->flow_base_flm;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ /* Delete all created flows from this eth device */
+ flow = dev->ndev->flow_base;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ return err;
+}
+
static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
{
return (hash_mask & hash_bits) == hash_bits;
@@ -4400,6 +4442,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
/*
* NT Flow FLM Meter API
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 2c76a2c023..c695842077 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,10 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 5505198148..87b26bd315 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -569,6 +569,43 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ int res = 0;
+ /* Main application caller_id is port_id shifted above VDPA ports */
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (internals->flw_dev) {
+ res = flow_filter_ops->flow_flush(internals->flw_dev, caller_id, &flow_error);
+ rte_spinlock_lock(&flow_lock);
+
+ for (int flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used && nt_flows[flow].caller_id == caller_id) {
+ /* Cleanup recorded flows */
+ nt_flows[flow].used = 0;
+ nt_flows[flow].caller_id = 0;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
+ }
+
+ convert_error(error, &flow_error);
+
+ return res;
+}
+
static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
struct rte_flow *flow,
FILE *file,
@@ -600,6 +637,7 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index cef655c5e0..12baa13800 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,10 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_flush_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -309,6 +313,9 @@ struct flow_filter_ops {
int (*flow_destroy)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 39/73] net/ntnic: add GMF (Generic MAC Feeder) module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (37 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 38/73] net/ntnic: add flow flush Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 40/73] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
` (37 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
The Generic MAC Feeder (GMF) module provides a way to feed data
to the MAC modules directly from the FPGA rather than from the host
or the physical ports. Its use case is as a test tool, and it is not
used by NTNIC itself; however, the module is required for correct
initialization.
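nthw_gmf_init() doubles as a presence probe: called with a NULL handle it only reports whether the GMF instance exists, and _port_init() uses that before committing to a full init. The contract can be sketched with hypothetical stand-ins (struct fpga and struct gmf here are illustrative, not the driver's types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct fpga { bool has_gmf; };		/* stand-in for nthw_fpga_t */
struct gmf { bool enabled; };		/* stand-in for nthw_gmf_t */

/* Probe-or-init contract of nthw_gmf_init(): with p == NULL, return
 * 0 if the module instance exists and -1 otherwise, touching nothing;
 * with a real handle, fail if absent, else initialize it. */
static int gmf_init(struct gmf *p, const struct fpga *f)
{
	bool present = f->has_gmf;

	if (p == NULL)
		return present ? 0 : -1;	/* probe only */

	if (!present)
		return -1;

	p->enabled = false;
	return 0;
}
```

This matches the two-stage call in _port_init(): probe first with NULL, then init a real handle and enable the module only if both calls succeed.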
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 ++
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +++++++++
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 ++++++++++++++++++
5 files changed, 207 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
diff --git a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
index 8964458b47..d8e0cad7cd 100644
--- a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
+++ b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
@@ -404,6 +404,14 @@ static int _port_init(adapter_info_t *drv, nthw_fpga_t *fpga, int port)
_enable_tx(drv, mac_pcs);
_reset_rx(drv, mac_pcs);
+ /* 2.2) Nt4gaPort::setup() */
+ if (nthw_gmf_init(NULL, fpga, port) == 0) {
+ nthw_gmf_t gmf;
+
+ if (nthw_gmf_init(&gmf, fpga, port) == 0)
+ nthw_gmf_set_enable(&gmf, true);
+ }
+
/* Phase 3. Link state machine steps */
/* 3.1) Create NIM, ::createNim() */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d7e6d05556..92167d24e4 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -38,6 +38,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst9563.c',
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
+ 'nthw/core/nthw_gmf.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_core.h b/drivers/net/ntnic/nthw/core/include/nthw_core.h
index fe32891712..4073f9632c 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_core.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_core.h
@@ -17,6 +17,7 @@
#include "nthw_iic.h"
#include "nthw_i2cm.h"
+#include "nthw_gmf.h"
#include "nthw_gpio_phy.h"
#include "nthw_mac_pcs.h"
#include "nthw_sdc.h"
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_gmf.h b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
new file mode 100644
index 0000000000..cc5be85154
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
@@ -0,0 +1,64 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_GMF_H__
+#define __NTHW_GMF_H__
+
+struct nthw_gmf {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_gmf;
+ int mn_instance;
+
+ nthw_register_t *mp_ctrl;
+ nthw_field_t *mp_ctrl_enable;
+ nthw_field_t *mp_ctrl_ifg_enable;
+ nthw_field_t *mp_ctrl_ifg_tx_now_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock;
+ nthw_field_t *mp_ctrl_ifg_auto_adjust_enable;
+ nthw_field_t *mp_ctrl_ts_inject_always;
+ nthw_field_t *mp_ctrl_fcs_always;
+
+ nthw_register_t *mp_speed;
+ nthw_field_t *mp_speed_ifg_speed;
+
+ nthw_register_t *mp_ifg_clock_delta;
+ nthw_field_t *mp_ifg_clock_delta_delta;
+
+ nthw_register_t *mp_ifg_clock_delta_adjust;
+ nthw_field_t *mp_ifg_clock_delta_adjust_delta;
+
+ nthw_register_t *mp_ifg_max_adjust_slack;
+ nthw_field_t *mp_ifg_max_adjust_slack_slack;
+
+ nthw_register_t *mp_debug_lane_marker;
+ nthw_field_t *mp_debug_lane_marker_compensation;
+
+ nthw_register_t *mp_stat_sticky;
+ nthw_field_t *mp_stat_sticky_data_underflowed;
+ nthw_field_t *mp_stat_sticky_ifg_adjusted;
+
+ nthw_register_t *mp_stat_next_pkt;
+ nthw_field_t *mp_stat_next_pkt_ns;
+
+ nthw_register_t *mp_stat_max_delayed_pkt;
+ nthw_field_t *mp_stat_max_delayed_pkt_ns;
+
+ nthw_register_t *mp_ts_inject;
+ nthw_field_t *mp_ts_inject_offset;
+ nthw_field_t *mp_ts_inject_pos;
+ int mn_param_gmf_ifg_speed_mul;
+ int mn_param_gmf_ifg_speed_div;
+
+ bool m_administrative_block; /* Used to enforce license expiry */
+};
+
+typedef struct nthw_gmf nthw_gmf_t;
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable);
+
+#endif /* __NTHW_GMF_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_gmf.c b/drivers/net/ntnic/nthw/core/nthw_gmf.c
new file mode 100644
index 0000000000..16a4c288bd
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_gmf.c
@@ -0,0 +1,133 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <limits.h>
+#include <math.h>
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_gmf.h"
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_GMF, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: GMF %d: no such instance",
+ p_fpga->p_fpga_info->mp_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_gmf = mod;
+
+ p->mp_ctrl = nthw_module_get_register(p->mp_mod_gmf, GMF_CTRL);
+ p->mp_ctrl_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_ENABLE);
+ p->mp_ctrl_ifg_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_ENABLE);
+ p->mp_ctrl_ifg_auto_adjust_enable =
+ nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_AUTO_ADJUST_ENABLE);
+ p->mp_ctrl_ts_inject_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_TS_INJECT_ALWAYS);
+ p->mp_ctrl_fcs_always = nthw_register_query_field(p->mp_ctrl, GMF_CTRL_FCS_ALWAYS);
+
+ p->mp_speed = nthw_module_get_register(p->mp_mod_gmf, GMF_SPEED);
+ p->mp_speed_ifg_speed = nthw_register_get_field(p->mp_speed, GMF_SPEED_IFG_SPEED);
+
+ p->mp_ifg_clock_delta = nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA);
+ p->mp_ifg_clock_delta_delta =
+ nthw_register_get_field(p->mp_ifg_clock_delta, GMF_IFG_SET_CLOCK_DELTA_DELTA);
+
+ p->mp_ifg_max_adjust_slack =
+ nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_MAX_ADJUST_SLACK);
+ p->mp_ifg_max_adjust_slack_slack = nthw_register_get_field(p->mp_ifg_max_adjust_slack,
+ GMF_IFG_MAX_ADJUST_SLACK_SLACK);
+
+ p->mp_debug_lane_marker = nthw_module_get_register(p->mp_mod_gmf, GMF_DEBUG_LANE_MARKER);
+ p->mp_debug_lane_marker_compensation =
+ nthw_register_get_field(p->mp_debug_lane_marker,
+ GMF_DEBUG_LANE_MARKER_COMPENSATION);
+
+ p->mp_stat_sticky = nthw_module_get_register(p->mp_mod_gmf, GMF_STAT_STICKY);
+ p->mp_stat_sticky_data_underflowed =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_DATA_UNDERFLOWED);
+ p->mp_stat_sticky_ifg_adjusted =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_IFG_ADJUSTED);
+
+ p->mn_param_gmf_ifg_speed_mul =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_MUL, 1);
+ p->mn_param_gmf_ifg_speed_div =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_DIV, 1);
+
+ p->m_administrative_block = false;
+
+ p->mp_stat_next_pkt = nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_NEXT_PKT);
+
+ if (p->mp_stat_next_pkt) {
+ p->mp_stat_next_pkt_ns =
+ nthw_register_query_field(p->mp_stat_next_pkt, GMF_STAT_NEXT_PKT_NS);
+
+ } else {
+ p->mp_stat_next_pkt_ns = NULL;
+ }
+
+ p->mp_stat_max_delayed_pkt =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_MAX_DELAYED_PKT);
+
+ if (p->mp_stat_max_delayed_pkt) {
+ p->mp_stat_max_delayed_pkt_ns =
+ nthw_register_query_field(p->mp_stat_max_delayed_pkt,
+ GMF_STAT_MAX_DELAYED_PKT_NS);
+
+ } else {
+ p->mp_stat_max_delayed_pkt_ns = NULL;
+ }
+
+ p->mp_ctrl_ifg_tx_now_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_NOW_ALWAYS);
+ p->mp_ctrl_ifg_tx_on_ts_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ALWAYS);
+
+ p->mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ADJUST_ON_SET_CLOCK);
+
+ p->mp_ifg_clock_delta_adjust =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA_ADJUST);
+
+ if (p->mp_ifg_clock_delta_adjust) {
+ p->mp_ifg_clock_delta_adjust_delta =
+ nthw_register_query_field(p->mp_ifg_clock_delta_adjust,
+ GMF_IFG_SET_CLOCK_DELTA_ADJUST_DELTA);
+
+ } else {
+ p->mp_ifg_clock_delta_adjust_delta = NULL;
+ }
+
+ p->mp_ts_inject = nthw_module_query_register(p->mp_mod_gmf, GMF_TS_INJECT);
+
+ if (p->mp_ts_inject) {
+ p->mp_ts_inject_offset =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_OFFSET);
+ p->mp_ts_inject_pos =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_POS);
+
+ } else {
+ p->mp_ts_inject_offset = NULL;
+ p->mp_ts_inject_pos = NULL;
+ }
+
+ return 0;
+}
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable)
+{
+ if (!p->m_administrative_block)
+ nthw_field_set_val_flush32(p->mp_ctrl_enable, enable ? 1 : 0);
+}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 40/73] net/ntnic: sort FPGA registers alphanumerically
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (38 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 39/73] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 41/73] net/ntnic: add MOD CSU Serhii Iliushyk
` (36 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Beautification commit: sorting the register tables is required to cleanly support different FPGA variants.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 364 +++++++++---------
1 file changed, 182 insertions(+), 182 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 6df7208649..e076697a92 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,187 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
+ { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
+ { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
+ { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
+ { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
+ { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
+ { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
+ { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
+ { DBS_RX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
+ { DBS_RX_INIT_BUSY, 1, 8, 0 },
+ { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
+ { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
+ { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
+ { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
+ { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
+ { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
+ { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
+ { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
+ { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
+ { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
+ { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
+ { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
+ { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
+ { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
+ { DBS_TX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
+ { DBS_TX_INIT_BUSY, 1, 8, 0 },
+ { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
+ { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
+ { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
+ { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
+ { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
+ { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
+ { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
+ { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
+ { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
+ { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
+ { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
+ { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
+ { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
+ { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
+ { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_register_init_s dbs_registers[] = {
+ { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
+ { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
+ { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
+ { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
+ { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
+ { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
+ { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
+ { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
+ { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
+ { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
+ { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
+ { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
+ { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
+ { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
+ { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
+ { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
+ { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
+ { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
+ { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
+ { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
+ { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
+ { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
+ { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
+ { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
+ { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
+ { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
+ { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1541,192 +1722,11 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
-static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
- { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
- { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
- { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
- { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
- { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
- { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
- { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
- { DBS_RX_IDLE_BUSY, 1, 8, 0 },
- { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
- { DBS_RX_INIT_BUSY, 1, 8, 0 },
- { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
- { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
- { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
- { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
- { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
- { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
- { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
- { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
- { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
- { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
- { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
- { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
- { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
- { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
- { DBS_TX_IDLE_BUSY, 1, 8, 0 },
- { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
- { DBS_TX_INIT_BUSY, 1, 8, 0 },
- { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
- { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
- { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
- { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
- { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
- { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
- { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
- { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
- { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
- { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
- { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
- { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
- { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
- { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
- { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_register_init_s dbs_registers[] = {
- { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
- { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
- { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
- { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
- { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
- { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
- { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
- { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
- { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
- { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
- { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
- { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
- { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
- { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
- { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
- { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
- { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
- { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
- { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
- { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
- { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
- { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
- { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
- { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
- { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
- { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
- { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
-};
-
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
- { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers},
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
{
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 41/73] net/ntnic: add MOD CSU
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (39 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 40/73] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 42/73] net/ntnic: add MOD FLM Serhii Iliushyk
` (35 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Checksum Update (CSU) module recalculates the checksums of packets
that have been modified in any way.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index e076697a92..efa7b306bc 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,23 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
+ { CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s csu_rcp_data_fields[] = {
+ { CSU_RCP_DATA_IL3_CMD, 2, 5, 0x0000 },
+ { CSU_RCP_DATA_IL4_CMD, 3, 7, 0x0000 },
+ { CSU_RCP_DATA_OL3_CMD, 2, 0, 0x0000 },
+ { CSU_RCP_DATA_OL4_CMD, 3, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s csu_registers[] = {
+ { CSU_RCP_CTRL, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, csu_rcp_ctrl_fields },
+ { CSU_RCP_DATA, 2, 10, NTHW_FPGA_REG_TYPE_WO, 0, 4, csu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
{ DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
{ DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
@@ -1724,6 +1741,7 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
@@ -1919,5 +1937,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 22, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 42/73] net/ntnic: add MOD FLM
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (40 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 41/73] net/ntnic: add MOD CSU Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 43/73] net/ntnic: add HFU module Serhii Iliushyk
` (34 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Flow Matcher (FLM) module is a high-performance, stateful SDRAM lookup
and programming engine which supports exact-match lookups at line rate
for up to hundreds of millions of flows.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 286 +++++++++++++++++-
1 file changed, 284 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efa7b306bc..739cabfb1c 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -468,6 +468,288 @@ static nthw_fpga_register_init_s dbs_registers[] = {
{ DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
};
+static nthw_fpga_field_init_s flm_buf_ctrl_fields[] = {
+ { FLM_BUF_CTRL_INF_AVAIL, 16, 16, 0x0000 },
+ { FLM_BUF_CTRL_LRN_FREE, 16, 0, 0x0000 },
+ { FLM_BUF_CTRL_STA_AVAIL, 16, 32, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_control_fields[] = {
+ { FLM_CONTROL_CALIB_RECALIBRATE, 3, 28, 0 },
+ { FLM_CONTROL_CRCRD, 1, 12, 0x0000 },
+ { FLM_CONTROL_CRCWR, 1, 11, 0x0000 },
+ { FLM_CONTROL_EAB, 5, 18, 0 },
+ { FLM_CONTROL_ENABLE, 1, 0, 0 },
+ { FLM_CONTROL_INIT, 1, 1, 0x0000 },
+ { FLM_CONTROL_LDS, 1, 2, 0x0000 },
+ { FLM_CONTROL_LFS, 1, 3, 0x0000 },
+ { FLM_CONTROL_LIS, 1, 4, 0x0000 },
+ { FLM_CONTROL_PDS, 1, 9, 0x0000 },
+ { FLM_CONTROL_PIS, 1, 10, 0x0000 },
+ { FLM_CONTROL_RBL, 4, 13, 0 },
+ { FLM_CONTROL_RDS, 1, 7, 0x0000 },
+ { FLM_CONTROL_RIS, 1, 8, 0x0000 },
+ { FLM_CONTROL_SPLIT_SDRAM_USAGE, 5, 23, 16 },
+ { FLM_CONTROL_UDS, 1, 5, 0x0000 },
+ { FLM_CONTROL_UIS, 1, 6, 0x0000 },
+ { FLM_CONTROL_WPD, 1, 17, 0 },
+};
+
+static nthw_fpga_field_init_s flm_inf_data_fields[] = {
+ { FLM_INF_DATA_BYTES, 64, 0, 0x0000 }, { FLM_INF_DATA_CAUSE, 3, 224, 0x0000 },
+ { FLM_INF_DATA_EOR, 1, 287, 0x0000 }, { FLM_INF_DATA_ID, 32, 192, 0x0000 },
+ { FLM_INF_DATA_PACKETS, 64, 64, 0x0000 }, { FLM_INF_DATA_TS, 64, 128, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_aps_fields[] = {
+ { FLM_LOAD_APS_APS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_bin_fields[] = {
+ { FLM_LOAD_BIN_BIN, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_lps_fields[] = {
+ { FLM_LOAD_LPS_LPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
+ { FLM_LRN_DATA_ADJ, 32, 480, 0x0000 }, { FLM_LRN_DATA_COLOR, 32, 448, 0x0000 },
+ { FLM_LRN_DATA_DSCP, 6, 698, 0x0000 }, { FLM_LRN_DATA_ENT, 1, 693, 0x0000 },
+ { FLM_LRN_DATA_EOR, 1, 767, 0x0000 }, { FLM_LRN_DATA_FILL, 16, 544, 0x0000 },
+ { FLM_LRN_DATA_FT, 4, 560, 0x0000 }, { FLM_LRN_DATA_FT_MBR, 4, 564, 0x0000 },
+ { FLM_LRN_DATA_FT_MISS, 4, 568, 0x0000 }, { FLM_LRN_DATA_ID, 32, 512, 0x0000 },
+ { FLM_LRN_DATA_KID, 8, 328, 0x0000 }, { FLM_LRN_DATA_MBR_ID1, 28, 572, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID2, 28, 600, 0x0000 }, { FLM_LRN_DATA_MBR_ID3, 28, 628, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID4, 28, 656, 0x0000 }, { FLM_LRN_DATA_NAT_EN, 1, 711, 0x0000 },
+ { FLM_LRN_DATA_NAT_IP, 32, 336, 0x0000 }, { FLM_LRN_DATA_NAT_PORT, 16, 400, 0x0000 },
+ { FLM_LRN_DATA_NOFI, 1, 716, 0x0000 }, { FLM_LRN_DATA_OP, 4, 694, 0x0000 },
+ { FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
+ { FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
+ { FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
+ { FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
+ { FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_prio_fields[] = {
+ { FLM_PRIO_FT0, 4, 4, 1 }, { FLM_PRIO_FT1, 4, 12, 1 }, { FLM_PRIO_FT2, 4, 20, 1 },
+ { FLM_PRIO_FT3, 4, 28, 1 }, { FLM_PRIO_LIMIT0, 4, 0, 0 }, { FLM_PRIO_LIMIT1, 4, 8, 0 },
+ { FLM_PRIO_LIMIT2, 4, 16, 0 }, { FLM_PRIO_LIMIT3, 4, 24, 0 },
+};
+
+static nthw_fpga_field_init_s flm_pst_ctrl_fields[] = {
+ { FLM_PST_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_PST_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_pst_data_fields[] = {
+ { FLM_PST_DATA_BP, 5, 0, 0x0000 },
+ { FLM_PST_DATA_PP, 5, 5, 0x0000 },
+ { FLM_PST_DATA_TP, 5, 10, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_ctrl_fields[] = {
+ { FLM_RCP_CTRL_ADR, 5, 0, 0x0000 },
+ { FLM_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_data_fields[] = {
+ { FLM_RCP_DATA_AUTO_IPV4_MASK, 1, 402, 0x0000 },
+ { FLM_RCP_DATA_BYT_DYN, 5, 387, 0x0000 },
+ { FLM_RCP_DATA_BYT_OFS, 8, 392, 0x0000 },
+ { FLM_RCP_DATA_IPN, 1, 386, 0x0000 },
+ { FLM_RCP_DATA_KID, 8, 377, 0x0000 },
+ { FLM_RCP_DATA_LOOKUP, 1, 0, 0x0000 },
+ { FLM_RCP_DATA_MASK, 320, 57, 0x0000 },
+ { FLM_RCP_DATA_OPN, 1, 385, 0x0000 },
+ { FLM_RCP_DATA_QW0_DYN, 5, 1, 0x0000 },
+ { FLM_RCP_DATA_QW0_OFS, 8, 6, 0x0000 },
+ { FLM_RCP_DATA_QW0_SEL, 2, 14, 0x0000 },
+ { FLM_RCP_DATA_QW4_DYN, 5, 16, 0x0000 },
+ { FLM_RCP_DATA_QW4_OFS, 8, 21, 0x0000 },
+ { FLM_RCP_DATA_SW8_DYN, 5, 29, 0x0000 },
+ { FLM_RCP_DATA_SW8_OFS, 8, 34, 0x0000 },
+ { FLM_RCP_DATA_SW8_SEL, 2, 42, 0x0000 },
+ { FLM_RCP_DATA_SW9_DYN, 5, 44, 0x0000 },
+ { FLM_RCP_DATA_SW9_OFS, 8, 49, 0x0000 },
+ { FLM_RCP_DATA_TXPLM, 2, 400, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scan_fields[] = {
+ { FLM_SCAN_I, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s flm_status_fields[] = {
+ { FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
+ { FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
+ { FLM_STATUS_CALIB_SUCCESS, 3, 0, 0 },
+ { FLM_STATUS_CRCERR, 1, 10, 0x0000 },
+ { FLM_STATUS_CRITICAL, 1, 8, 0x0000 },
+ { FLM_STATUS_EFT_BP, 1, 11, 0x0000 },
+ { FLM_STATUS_IDLE, 1, 7, 0x0000 },
+ { FLM_STATUS_INITDONE, 1, 6, 0x0000 },
+ { FLM_STATUS_PANIC, 1, 9, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_done_fields[] = {
+ { FLM_STAT_AUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_fail_fields[] = {
+ { FLM_STAT_AUL_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_ignore_fields[] = {
+ { FLM_STAT_AUL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_hit_fields[] = {
+ { FLM_STAT_CSH_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_miss_fields[] = {
+ { FLM_STAT_CSH_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_unh_fields[] = {
+ { FLM_STAT_CSH_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_move_fields[] = {
+ { FLM_STAT_CUC_MOVE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_start_fields[] = {
+ { FLM_STAT_CUC_START_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_flows_fields[] = {
+ { FLM_STAT_FLOWS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_done_fields[] = {
+ { FLM_STAT_INF_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_skip_fields[] = {
+ { FLM_STAT_INF_SKIP_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_done_fields[] = {
+ { FLM_STAT_LRN_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_fail_fields[] = {
+ { FLM_STAT_LRN_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_ignore_fields[] = {
+ { FLM_STAT_LRN_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_dis_fields[] = {
+ { FLM_STAT_PCK_DIS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_hit_fields[] = {
+ { FLM_STAT_PCK_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_miss_fields[] = {
+ { FLM_STAT_PCK_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_unh_fields[] = {
+ { FLM_STAT_PCK_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_done_fields[] = {
+ { FLM_STAT_PRB_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_ignore_fields[] = {
+ { FLM_STAT_PRB_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_done_fields[] = {
+ { FLM_STAT_REL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_ignore_fields[] = {
+ { FLM_STAT_REL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_sta_done_fields[] = {
+ { FLM_STAT_STA_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_tul_done_fields[] = {
+ { FLM_STAT_TUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_done_fields[] = {
+ { FLM_STAT_UNL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_ignore_fields[] = {
+ { FLM_STAT_UNL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_sta_data_fields[] = {
+ { FLM_STA_DATA_EOR, 1, 95, 0x0000 }, { FLM_STA_DATA_ID, 32, 0, 0x0000 },
+ { FLM_STA_DATA_LDS, 1, 32, 0x0000 }, { FLM_STA_DATA_LFS, 1, 33, 0x0000 },
+ { FLM_STA_DATA_LIS, 1, 34, 0x0000 }, { FLM_STA_DATA_PDS, 1, 39, 0x0000 },
+ { FLM_STA_DATA_PIS, 1, 40, 0x0000 }, { FLM_STA_DATA_RDS, 1, 37, 0x0000 },
+ { FLM_STA_DATA_RIS, 1, 38, 0x0000 }, { FLM_STA_DATA_UDS, 1, 35, 0x0000 },
+ { FLM_STA_DATA_UIS, 1, 36, 0x0000 },
+};
+
+static nthw_fpga_register_init_s flm_registers[] = {
+ { FLM_BUF_CTRL, 14, 48, NTHW_FPGA_REG_TYPE_RW, 0, 3, flm_buf_ctrl_fields },
+ { FLM_CONTROL, 0, 31, NTHW_FPGA_REG_TYPE_MIXED, 134217728, 18, flm_control_fields },
+ { FLM_INF_DATA, 16, 288, NTHW_FPGA_REG_TYPE_RO, 0, 6, flm_inf_data_fields },
+ { FLM_LOAD_APS, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_aps_fields },
+ { FLM_LOAD_BIN, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_load_bin_fields },
+ { FLM_LOAD_LPS, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_lps_fields },
+ { FLM_LRN_DATA, 15, 768, NTHW_FPGA_REG_TYPE_WO, 0, 34, flm_lrn_data_fields },
+ { FLM_PRIO, 6, 32, NTHW_FPGA_REG_TYPE_WO, 269488144, 8, flm_prio_fields },
+ { FLM_PST_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_pst_ctrl_fields },
+ { FLM_PST_DATA, 13, 15, NTHW_FPGA_REG_TYPE_WO, 0, 3, flm_pst_data_fields },
+ { FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
+ { FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
+ { FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
+ { FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
+ { FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
+ { FLM_STAT_AUL_IGNORE, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_ignore_fields },
+ { FLM_STAT_CSH_HIT, 52, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_hit_fields },
+ { FLM_STAT_CSH_MISS, 53, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_miss_fields },
+ { FLM_STAT_CSH_UNH, 54, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_unh_fields },
+ { FLM_STAT_CUC_MOVE, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_move_fields },
+ { FLM_STAT_CUC_START, 55, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_start_fields },
+ { FLM_STAT_FLOWS, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_flows_fields },
+ { FLM_STAT_INF_DONE, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_done_fields },
+ { FLM_STAT_INF_SKIP, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_skip_fields },
+ { FLM_STAT_LRN_DONE, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_done_fields },
+ { FLM_STAT_LRN_FAIL, 34, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_fail_fields },
+ { FLM_STAT_LRN_IGNORE, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_ignore_fields },
+ { FLM_STAT_PCK_DIS, 51, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_dis_fields },
+ { FLM_STAT_PCK_HIT, 48, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_hit_fields },
+ { FLM_STAT_PCK_MISS, 49, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_miss_fields },
+ { FLM_STAT_PCK_UNH, 50, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_unh_fields },
+ { FLM_STAT_PRB_DONE, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_done_fields },
+ { FLM_STAT_PRB_IGNORE, 40, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_ignore_fields },
+ { FLM_STAT_REL_DONE, 37, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_done_fields },
+ { FLM_STAT_REL_IGNORE, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_ignore_fields },
+ { FLM_STAT_STA_DONE, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_sta_done_fields },
+ { FLM_STAT_TUL_DONE, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_tul_done_fields },
+ { FLM_STAT_UNL_DONE, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_done_fields },
+ { FLM_STAT_UNL_IGNORE, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_ignore_fields },
+ { FLM_STA_DATA, 17, 96, NTHW_FPGA_REG_TYPE_RO, 0, 11, flm_sta_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1743,6 +2025,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
+ { MOD_FLM, 0, MOD_FLM, 0, 25, NTHW_FPGA_BUS_TYPE_RAB1, 1280, 43, flm_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
@@ -1817,7 +2100,6 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
- { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
@@ -1937,5 +2219,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 43/73] net/ntnic: add HFU module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (41 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 42/73] net/ntnic: add MOD FLM Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 44/73] net/ntnic: add IFR module Serhii Iliushyk
` (33 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Header Field Update (HFU) module updates protocol header fields,
for example length and next-protocol fields,
when packets have been modified.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 739cabfb1c..82068746b3 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -919,6 +919,41 @@ static nthw_fpga_register_init_s gpio_phy_registers[] = {
{ GPIO_PHY_GPIO, 1, 10, NTHW_FPGA_REG_TYPE_RW, 17, 10, gpio_phy_gpio_fields },
};
+static nthw_fpga_field_init_s hfu_rcp_ctrl_fields[] = {
+ { HFU_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { HFU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s hfu_rcp_data_fields[] = {
+ { HFU_RCP_DATA_LEN_A_ADD_DYN, 5, 15, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_ADD_OFS, 8, 20, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_OL4LEN, 1, 1, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_DYN, 5, 2, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_OFS, 8, 7, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_SUB_DYN, 5, 28, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_WR, 1, 0, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_DYN, 5, 47, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_OFS, 8, 52, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_DYN, 5, 34, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_OFS, 8, 39, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_SUB_DYN, 5, 60, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_WR, 1, 33, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_DYN, 5, 79, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_OFS, 8, 84, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_DYN, 5, 66, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_OFS, 8, 71, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_SUB_DYN, 5, 92, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_WR, 1, 65, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_DYN, 5, 98, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_OFS, 8, 103, 0x0000 },
+ { HFU_RCP_DATA_TTL_WR, 1, 97, 0x0000 },
+};
+
+static nthw_fpga_register_init_s hfu_registers[] = {
+ { HFU_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, hfu_rcp_ctrl_fields },
+ { HFU_RCP_DATA, 1, 111, NTHW_FPGA_REG_TYPE_WO, 0, 22, hfu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s hif_build_time_fields[] = {
{ HIF_BUILD_TIME_TIME, 32, 0, 1726740521 },
};
@@ -2033,6 +2068,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
gpio_phy_registers
},
+ { MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
@@ -2219,5 +2255,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 44/73] net/ntnic: add IFR module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (42 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 43/73] net/ntnic: add HFU module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 45/73] net/ntnic: add MAC Rx module Serhii Iliushyk
` (32 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The IP Fragmenter module can fragment outgoing packets
based on a programmable MTU.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 82068746b3..509e1f6860 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1095,6 +1095,44 @@ static nthw_fpga_register_init_s hsh_registers[] = {
{ HSH_RCP_DATA, 1, 743, NTHW_FPGA_REG_TYPE_WO, 0, 23, hsh_rcp_data_fields },
};
+static nthw_fpga_field_init_s ifr_counters_ctrl_fields[] = {
+ { IFR_COUNTERS_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_COUNTERS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_counters_data_fields[] = {
+ { IFR_COUNTERS_DATA_DROP, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_ctrl_fields[] = {
+ { IFR_DF_BUF_CTRL_AVAILABLE, 11, 0, 0x0000 },
+ { IFR_DF_BUF_CTRL_MTU_PROFILE, 16, 11, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_data_fields[] = {
+ { IFR_DF_BUF_DATA_FIFO_DAT, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_ctrl_fields[] = {
+ { IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_data_fields[] = {
+ { IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 }, { IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 }, { IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ifr_registers[] = {
+ { IFR_COUNTERS_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_counters_ctrl_fields },
+ { IFR_COUNTERS_DATA, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_counters_data_fields },
+ { IFR_DF_BUF_CTRL, 2, 27, NTHW_FPGA_REG_TYPE_RO, 0, 2, ifr_df_buf_ctrl_fields },
+ { IFR_DF_BUF_DATA, 3, 128, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_df_buf_data_fields },
+ { IFR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_rcp_ctrl_fields },
+ { IFR_RCP_DATA, 1, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, ifr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s iic_adr_fields[] = {
{ IIC_ADR_SLV_ADR, 7, 1, 0 },
};
@@ -2071,6 +2109,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
+ { MOD_IFR, 0, MOD_IFR, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 9984, 6, ifr_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
{ MOD_IIC, 1, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 896, 22, iic_registers },
{ MOD_IIC, 2, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 24832, 22, iic_registers },
@@ -2255,5 +2294,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 45/73] net/ntnic: add MAC Rx module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (43 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 44/73] net/ntnic: add IFR module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 46/73] net/ntnic: add MAC Tx module Serhii Iliushyk
` (31 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Receive module contains counters
that keep track of received packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 61 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +++++++++
4 files changed, 92 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 509e1f6860..eecd6342c0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1774,6 +1774,63 @@ static nthw_fpga_register_init_s mac_pcs_registers[] = {
},
};
+static nthw_fpga_field_init_s mac_rx_bad_fcs_fields[] = {
+ { MAC_RX_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_fragment_fields[] = {
+ { MAC_RX_FRAGMENT_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_bad_fcs_fields[] = {
+ { MAC_RX_PACKET_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_small_fields[] = {
+ { MAC_RX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_bytes_fields[] = {
+ { MAC_RX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_bytes_fields[] = {
+ { MAC_RX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_packets_fields[] = {
+ { MAC_RX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_packets_fields[] = {
+ { MAC_RX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_undersize_fields[] = {
+ { MAC_RX_UNDERSIZE_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_rx_registers[] = {
+ { MAC_RX_BAD_FCS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_bad_fcs_fields },
+ { MAC_RX_FRAGMENT, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_fragment_fields },
+ {
+ MAC_RX_PACKET_BAD_FCS, 7, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_packet_bad_fcs_fields
+ },
+ { MAC_RX_PACKET_SMALL, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_packet_small_fields },
+ { MAC_RX_TOTAL_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_bytes_fields },
+ {
+ MAC_RX_TOTAL_GOOD_BYTES, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_bytes_fields
+ },
+ {
+ MAC_RX_TOTAL_GOOD_PACKETS, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_packets_fields
+ },
+ { MAC_RX_TOTAL_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_packets_fields },
+ { MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2123,6 +2180,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_MAC_PCS, 1, MOD_MAC_PCS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB2, 11776, 44,
mac_pcs_registers
},
+ { MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
+ { MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2294,5 +2353,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index b6be02f45e..5983ba7095 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -29,6 +29,7 @@
#define MOD_IIC (0x7629cddbUL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
+#define MOD_MAC_RX (0x6347b490UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -43,7 +44,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (14)
+#define MOD_IDX_COUNT (31)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 3560eeda7d..5ebbec6c7e 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -30,6 +30,7 @@
#include "nthw_fpga_reg_defs_ins.h"
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
+#include "nthw_fpga_reg_defs_mac_rx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
new file mode 100644
index 0000000000..3829c10f3b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
@@ -0,0 +1,29 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_RX_
+#define _NTHW_FPGA_REG_DEFS_MAC_RX_
+
+/* MAC_RX */
+#define MAC_RX_BAD_FCS (0xca07f618UL)
+#define MAC_RX_BAD_FCS_COUNT (0x11d5ba0eUL)
+#define MAC_RX_FRAGMENT (0x5363b736UL)
+#define MAC_RX_FRAGMENT_COUNT (0xf664c9aUL)
+#define MAC_RX_PACKET_BAD_FCS (0x4cb8b34cUL)
+#define MAC_RX_PACKET_BAD_FCS_COUNT (0xb6701e28UL)
+#define MAC_RX_PACKET_SMALL (0xed318a65UL)
+#define MAC_RX_PACKET_SMALL_COUNT (0x72095ec7UL)
+#define MAC_RX_TOTAL_BYTES (0x831313e2UL)
+#define MAC_RX_TOTAL_BYTES_COUNT (0xe5d8be59UL)
+#define MAC_RX_TOTAL_GOOD_BYTES (0x912c2d1cUL)
+#define MAC_RX_TOTAL_GOOD_BYTES_COUNT (0x63bb5f3eUL)
+#define MAC_RX_TOTAL_GOOD_PACKETS (0xfbb4f497UL)
+#define MAC_RX_TOTAL_GOOD_PACKETS_COUNT (0xae9d21b0UL)
+#define MAC_RX_TOTAL_PACKETS (0xb0ea3730UL)
+#define MAC_RX_TOTAL_PACKETS_COUNT (0x532c885dUL)
+#define MAC_RX_UNDERSIZE (0xb6fa4bdbUL)
+#define MAC_RX_UNDERSIZE_COUNT (0x471945ffUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_RX_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 46/73] net/ntnic: add MAC Tx module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (44 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 45/73] net/ntnic: add MAC Rx module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 47/73] net/ntnic: add RPP LR module Serhii Iliushyk
` (30 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Transmit module contains counters
that keep track of transmitted packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 ++++++++++
4 files changed, 61 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index eecd6342c0..7a2f5aec32 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1831,6 +1831,40 @@ static nthw_fpga_register_init_s mac_rx_registers[] = {
{ MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
};
+static nthw_fpga_field_init_s mac_tx_packet_small_fields[] = {
+ { MAC_TX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_bytes_fields[] = {
+ { MAC_TX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_bytes_fields[] = {
+ { MAC_TX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_packets_fields[] = {
+ { MAC_TX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_packets_fields[] = {
+ { MAC_TX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_tx_registers[] = {
+ { MAC_TX_PACKET_SMALL, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_packet_small_fields },
+ { MAC_TX_TOTAL_BYTES, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_bytes_fields },
+ {
+ MAC_TX_TOTAL_GOOD_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_bytes_fields
+ },
+ {
+ MAC_TX_TOTAL_GOOD_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_packets_fields
+ },
+ { MAC_TX_TOTAL_PACKETS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_packets_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2182,6 +2216,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
},
{ MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
{ MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
+ { MOD_MAC_TX, 0, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 11264, 5, mac_tx_registers },
+ { MOD_MAC_TX, 1, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12800, 5, mac_tx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2353,5 +2389,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 5983ba7095..f4a913f3d2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -30,6 +30,7 @@
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
+#define MOD_MAC_TX (0x351d1316UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -44,7 +45,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (31)
+#define MOD_IDX_COUNT (32)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 5ebbec6c7e..7741aa563f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -31,6 +31,7 @@
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
#include "nthw_fpga_reg_defs_mac_rx.h"
+#include "nthw_fpga_reg_defs_mac_tx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
new file mode 100644
index 0000000000..6a77d449ae
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_TX_
+#define _NTHW_FPGA_REG_DEFS_MAC_TX_
+
+/* MAC_TX */
+#define MAC_TX_PACKET_SMALL (0xcfcb5e97UL)
+#define MAC_TX_PACKET_SMALL_COUNT (0x84345b01UL)
+#define MAC_TX_TOTAL_BYTES (0x7bd15854UL)
+#define MAC_TX_TOTAL_BYTES_COUNT (0x61fb238cUL)
+#define MAC_TX_TOTAL_GOOD_BYTES (0xcf0260fUL)
+#define MAC_TX_TOTAL_GOOD_BYTES_COUNT (0x8603398UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS (0xd89f151UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS_COUNT (0x12c47c77UL)
+#define MAC_TX_TOTAL_PACKETS (0xe37b5ed4UL)
+#define MAC_TX_TOTAL_PACKETS_COUNT (0x21ddd2ddUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_TX_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 47/73] net/ntnic: add RPP LR module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (45 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 46/73] net/ntnic: add MAC Tx module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 48/73] net/ntnic: add MOD SLC LR Serhii Iliushyk
` (29 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The RX Packet Process for Local Retransmit module can make room
for additional bytes in the FPGA TX pipeline,
which is needed when a packet grows in size.
Note that this only reserves room for packet expansion;
the actual expansion is done by other modules.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 32 ++++++++++++++++++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 7a2f5aec32..33437da204 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2138,6 +2138,35 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
+ { RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_data_fields[] = {
+ { RPP_LR_IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_ctrl_fields[] = {
+ { RPP_LR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_data_fields[] = {
+ { RPP_LR_RCP_DATA_EXP, 14, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpp_lr_registers[] = {
+ { RPP_LR_IFR_RCP_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_ifr_rcp_ctrl_fields },
+ { RPP_LR_IFR_RCP_DATA, 3, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, rpp_lr_ifr_rcp_data_fields },
+ { RPP_LR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_rcp_ctrl_fields },
+ { RPP_LR_RCP_DATA, 1, 14, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpp_lr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s rst9563_ctrl_fields[] = {
{ RST9563_CTRL_PTP_MMCM_CLKSEL, 1, 2, 1 },
{ RST9563_CTRL_TS_CLKSEL, 1, 1, 1 },
@@ -2230,6 +2259,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_QSL, 0, MOD_QSL, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 1792, 8, qsl_registers },
{ MOD_RAC, 0, MOD_RAC, 3, 0, NTHW_FPGA_BUS_TYPE_PCI, 8192, 14, rac_registers },
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
+ { MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
};
@@ -2389,5 +2419,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 48/73] net/ntnic: add MOD SLC LR
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (46 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 47/73] net/ntnic: add RPP LR module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 49/73] net/ntnic: add Tx CPY module Serhii Iliushyk
` (28 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new head to the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 33437da204..0f69f89527 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2214,6 +2214,23 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
+static nthw_fpga_field_init_s slc_rcp_ctrl_fields[] = {
+ { SLC_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { SLC_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s slc_rcp_data_fields[] = {
+ { SLC_RCP_DATA_HEAD_DYN, 5, 1, 0x0000 }, { SLC_RCP_DATA_HEAD_OFS, 8, 6, 0x0000 },
+ { SLC_RCP_DATA_HEAD_SLC_EN, 1, 0, 0x0000 }, { SLC_RCP_DATA_PCAP, 1, 35, 0x0000 },
+ { SLC_RCP_DATA_TAIL_DYN, 5, 15, 0x0000 }, { SLC_RCP_DATA_TAIL_OFS, 15, 20, 0x0000 },
+ { SLC_RCP_DATA_TAIL_SLC_EN, 1, 14, 0x0000 },
+};
+
+static nthw_fpga_register_init_s slc_registers[] = {
+ { SLC_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, slc_rcp_ctrl_fields },
+ { SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2261,6 +2278,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
+ { MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2419,5 +2437,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index f4a913f3d2..865dd6a084 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,11 +41,12 @@
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
+#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (32)
+#define MOD_IDX_COUNT (33)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 49/73] net/ntnic: add Tx CPY module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (47 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 48/73] net/ntnic: add MOD SLC LR Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 50/73] net/ntnic: add Tx INS module Serhii Iliushyk
` (27 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Copy module writes data to packet fields based on the lookup
performed by the FLM module.
This is used for NAT and can support other actions based
on the RTE action MODIFY_FIELD.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 204 +++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 205 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 0f69f89527..60fd748ea2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,207 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s cpy_packet_reader0_ctrl_fields[] = {
+ { CPY_PACKET_READER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_PACKET_READER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_packet_reader0_data_fields[] = {
+ { CPY_PACKET_READER0_DATA_DYN, 5, 10, 0x0000 },
+ { CPY_PACKET_READER0_DATA_OFS, 10, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_ctrl_fields[] = {
+ { CPY_WRITER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_data_fields[] = {
+ { CPY_WRITER0_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER0_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER0_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER0_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER0_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_ctrl_fields[] = {
+ { CPY_WRITER0_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_data_fields[] = {
+ { CPY_WRITER0_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_ctrl_fields[] = {
+ { CPY_WRITER1_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_data_fields[] = {
+ { CPY_WRITER1_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER1_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER1_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER1_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER1_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_ctrl_fields[] = {
+ { CPY_WRITER1_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_data_fields[] = {
+ { CPY_WRITER1_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_ctrl_fields[] = {
+ { CPY_WRITER2_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_data_fields[] = {
+ { CPY_WRITER2_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER2_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER2_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER2_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER2_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_ctrl_fields[] = {
+ { CPY_WRITER2_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_data_fields[] = {
+ { CPY_WRITER2_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_ctrl_fields[] = {
+ { CPY_WRITER3_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_data_fields[] = {
+ { CPY_WRITER3_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER3_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER3_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER3_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER3_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_ctrl_fields[] = {
+ { CPY_WRITER3_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_data_fields[] = {
+ { CPY_WRITER3_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_ctrl_fields[] = {
+ { CPY_WRITER4_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_data_fields[] = {
+ { CPY_WRITER4_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER4_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER4_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER4_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER4_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_ctrl_fields[] = {
+ { CPY_WRITER4_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_data_fields[] = {
+ { CPY_WRITER4_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_ctrl_fields[] = {
+ { CPY_WRITER5_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_data_fields[] = {
+ { CPY_WRITER5_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER5_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER5_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER5_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER5_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_ctrl_fields[] = {
+ { CPY_WRITER5_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_data_fields[] = {
+ { CPY_WRITER5_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s cpy_registers[] = {
+ {
+ CPY_PACKET_READER0_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_ctrl_fields
+ },
+ {
+ CPY_PACKET_READER0_DATA, 25, 15, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_data_fields
+ },
+ { CPY_WRITER0_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer0_ctrl_fields },
+ { CPY_WRITER0_DATA, 1, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer0_data_fields },
+ {
+ CPY_WRITER0_MASK_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer0_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER0_MASK_DATA, 3, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer0_mask_data_fields
+ },
+ { CPY_WRITER1_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer1_ctrl_fields },
+ { CPY_WRITER1_DATA, 5, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer1_data_fields },
+ {
+ CPY_WRITER1_MASK_CTRL, 6, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer1_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER1_MASK_DATA, 7, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer1_mask_data_fields
+ },
+ { CPY_WRITER2_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer2_ctrl_fields },
+ { CPY_WRITER2_DATA, 9, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer2_data_fields },
+ {
+ CPY_WRITER2_MASK_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer2_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER2_MASK_DATA, 11, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer2_mask_data_fields
+ },
+ { CPY_WRITER3_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer3_ctrl_fields },
+ { CPY_WRITER3_DATA, 13, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer3_data_fields },
+ {
+ CPY_WRITER3_MASK_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer3_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER3_MASK_DATA, 15, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer3_mask_data_fields
+ },
+ { CPY_WRITER4_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer4_ctrl_fields },
+ { CPY_WRITER4_DATA, 17, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer4_data_fields },
+ {
+ CPY_WRITER4_MASK_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer4_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER4_MASK_DATA, 19, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer4_mask_data_fields
+ },
+ { CPY_WRITER5_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer5_ctrl_fields },
+ { CPY_WRITER5_DATA, 21, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer5_data_fields },
+ {
+ CPY_WRITER5_MASK_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer5_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER5_MASK_DATA, 23, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer5_mask_data_fields
+ },
+};
+
static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
{ CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2279,6 +2480,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
+ { MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2437,5 +2639,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 865dd6a084..0ab5ae0310 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -15,6 +15,7 @@
#define MOD_UNKNOWN (0L)/* Unknown/uninitialized - keep this as the first element */
#define MOD_CAT (0x30b447c2UL)
+#define MOD_CPY (0x1ddc186fUL)
#define MOD_CSU (0x3f470787UL)
#define MOD_DBS (0x80b29727UL)
#define MOD_FLM (0xe7ba53a4UL)
@@ -46,7 +47,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (33)
+#define MOD_IDX_COUNT (34)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 50/73] net/ntnic: add Tx INS module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (48 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 49/73] net/ntnic: add Tx CPY module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 51/73] net/ntnic: add Tx RPL module Serhii Iliushyk
` (26 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Inserter module injects zeros at a given offset within a packet,
effectively expanding the packet.
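The effect on the packet can be sketched as a plain-C software model (a hypothetical helper for illustration, not driver code; the hardware performs the equivalent operation in the TX datapath):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Software model of the TX Inserter: insert `len` zero bytes at byte
 * offset `ofs`, expanding the packet. `buf` must have room for
 * pkt_len + len bytes. Returns the new packet length. */
static size_t insert_zeros(uint8_t *buf, size_t pkt_len, size_t ofs, size_t len)
{
	/* shift the packet tail to make room for the inserted bytes */
	memmove(buf + ofs + len, buf + ofs, pkt_len - ofs);
	/* zero-fill the gap; a later stage may overwrite it with real data */
	memset(buf + ofs, 0, len);
	return pkt_len + len;
}
```

The INS_RCP_DATA fields in the diff below (DYN, OFS, LEN) correspond to where the insertion point is anchored, its byte offset, and how many zero bytes are inserted.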
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 19 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 60fd748ea2..c8841b1dc2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1457,6 +1457,22 @@ static nthw_fpga_register_init_s iic_registers[] = {
{ IIC_TX_FIFO_OCY, 69, 4, NTHW_FPGA_REG_TYPE_RO, 0, 1, iic_tx_fifo_ocy_fields },
};
+static nthw_fpga_field_init_s ins_rcp_ctrl_fields[] = {
+ { INS_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { INS_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ins_rcp_data_fields[] = {
+ { INS_RCP_DATA_DYN, 5, 0, 0x0000 },
+ { INS_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { INS_RCP_DATA_OFS, 10, 5, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ins_registers[] = {
+ { INS_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ins_rcp_ctrl_fields },
+ { INS_RCP_DATA, 1, 23, NTHW_FPGA_REG_TYPE_WO, 0, 3, ins_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s km_cam_ctrl_fields[] = {
{ KM_CAM_CTRL_ADR, 13, 0, 0x0000 },
{ KM_CAM_CTRL_CNT, 16, 16, 0x0000 },
@@ -2481,6 +2497,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
+ { MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2639,5 +2656,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 0ab5ae0310..8c0c727e16 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -28,6 +28,7 @@
#define MOD_I2CM (0x93bc7780UL)
#define MOD_IFR (0x9b01f1e6UL)
#define MOD_IIC (0x7629cddbUL)
+#define MOD_INS (0x24df4b78UL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
@@ -47,7 +48,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (34)
+#define MOD_IDX_COUNT (35)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v1 51/73] net/ntnic: add Tx RPL module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (49 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 50/73] net/ntnic: add Tx INS module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
` (25 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Replacer module can replace a range of bytes in a packet.
The replacement data is stored in a table in the module
and typically contains tunnel data.
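A minimal software model of the replace step (illustrative only; table contents and the helper are hypothetical, and dynamic offset handling is omitted):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Entries of the on-chip replace table are 128 bits (16 bytes) wide,
 * matching the RPL_RPL_DATA_VALUE field width in the diff below.
 * The contents here are illustrative only. */
static const uint8_t rpl_table[4][16] = {
	[1] = { 0xAA, 0xBB, 0xCC, 0xDD },
};

/* Overwrite `len` bytes at offset `ofs` with data from table entry
 * `rpl_ptr`, as selected by RPL_RCP_DATA_RPL_PTR in a recipe. */
static void replace_bytes(uint8_t *pkt, size_t ofs, size_t len, unsigned int rpl_ptr)
{
	memcpy(pkt + ofs, rpl_table[rpl_ptr], len);
}
```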
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index c8841b1dc2..a3d9f94fc6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2355,6 +2355,44 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpl_ext_ctrl_fields[] = {
+ { RPL_EXT_CTRL_ADR, 10, 0, 0x0000 },
+ { RPL_EXT_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_ext_data_fields[] = {
+ { RPL_EXT_DATA_RPL_PTR, 12, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_ctrl_fields[] = {
+ { RPL_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPL_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_data_fields[] = {
+ { RPL_RCP_DATA_DYN, 5, 0, 0x0000 }, { RPL_RCP_DATA_ETH_TYPE_WR, 1, 36, 0x0000 },
+ { RPL_RCP_DATA_EXT_PRIO, 1, 35, 0x0000 }, { RPL_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { RPL_RCP_DATA_OFS, 10, 5, 0x0000 }, { RPL_RCP_DATA_RPL_PTR, 12, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_ctrl_fields[] = {
+ { RPL_RPL_CTRL_ADR, 12, 0, 0x0000 },
+ { RPL_RPL_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_data_fields[] = {
+ { RPL_RPL_DATA_VALUE, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpl_registers[] = {
+ { RPL_EXT_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_ext_ctrl_fields },
+ { RPL_EXT_DATA, 3, 12, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_ext_data_fields },
+ { RPL_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rcp_ctrl_fields },
+ { RPL_RCP_DATA, 1, 37, NTHW_FPGA_REG_TYPE_WO, 0, 6, rpl_rcp_data_fields },
+ { RPL_RPL_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rpl_ctrl_fields },
+ { RPL_RPL_DATA, 5, 128, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_rpl_data_fields },
+};
+
static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
{ RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2498,6 +2536,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
+ { MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2656,5 +2695,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 8c0c727e16..2b059d98ff 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -40,6 +40,7 @@
#define MOD_QSL (0x448ed859UL)
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
+#define MOD_RPL (0x6de535c3UL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
@@ -48,7 +49,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (35)
+#define MOD_IDX_COUNT (36)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v1 52/73] net/ntnic: update alignment for virt queue structs
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (50 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 51/73] net/ntnic: add Tx RPL module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 23:12 ` Stephen Hemminger
2024-10-21 21:04 ` [PATCH v1 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
` (24 subsequent siblings)
76 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, dvo-plv
Fix incorrect alignment: mark the virt queue structures packed with
1-byte alignment so no implicit padding is introduced.
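The resulting layout can be checked with a plain-GCC equivalent of the macros used in the diff below (assuming __rte_packed expands to the packed attribute; the driver itself uses the rte_common.h macros):

```c
#include <assert.h>
#include <stdint.h>

/* Plain-GCC sketch of the __rte_packed __rte_aligned(1) structs; packing
 * removes padding so the structs match the virtqueue layout byte for byte. */
struct virtq_used_elem_sketch {
	uint32_t id;   /* index of start of used descriptor chain */
	uint32_t len;  /* total length of the used descriptor chain */
} __attribute__((packed, aligned(1)));

struct virtq_used_sketch {
	uint16_t flags;
	uint16_t idx;
	struct virtq_used_elem_sketch ring[]; /* queue size */
} __attribute__((packed, aligned(1)));
```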
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
Cc: dvo-plv@napatech.com
---
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
index bde0fed273..70a48b6cdf 100644
--- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
+++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <rte_common.h>
#include <unistd.h>
#include "ntos_drv.h"
@@ -67,20 +68,20 @@
} \
} while (0)
-struct __rte_aligned(8) virtq_avail {
+struct __rte_packed __rte_aligned(1) virtq_avail {
uint16_t flags;
uint16_t idx;
uint16_t ring[]; /* Queue Size */
};
-struct __rte_aligned(8) virtq_used_elem {
+struct __rte_packed __rte_aligned(1) virtq_used_elem {
/* Index of start of used descriptor chain. */
uint32_t id;
/* Total length of the descriptor chain which was used (written to) */
uint32_t len;
};
-struct __rte_aligned(8) virtq_used {
+struct __rte_packed __rte_aligned(1) virtq_used {
uint16_t flags;
uint16_t idx;
struct virtq_used_elem ring[]; /* Queue Size */
--
2.45.0
* [PATCH v1 53/73] net/ntnic: enable RSS feature
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (51 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 54/73] net/ntnic: add statistics API Serhii Iliushyk
` (23 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Enable receive side scaling (RSS): RSS hash configuration, hash key
update, and the rte_flow RSS action.
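The patch rejects Toeplitz hashing unless the queue count is a power of two, which lets the hash be mapped to a queue with a mask; a minimal sketch of that arithmetic (illustrative helpers, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* True if v is a non-zero power of two (the check applied to the RSS
 * queue count when the Toeplitz hash function is requested). */
static int is_power_of_two(uint32_t v)
{
	return v != 0 && (v & (v - 1)) == 0;
}

/* Map an RSS hash to one of n_queues queues; valid only when n_queues
 * is a power of two, which is why the check above is enforced. */
static uint16_t rss_queue_select(uint32_t hash, uint16_t n_queues)
{
	return (uint16_t)(hash & (uint32_t)(n_queues - 1));
}
```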
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 3 +
drivers/net/ntnic/include/create_elements.h | 1 +
drivers/net/ntnic/include/flow_api.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 6 ++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 77 +++++++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 73 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 212 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4cb9509742..e5d5abd0ed 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -10,6 +10,8 @@ Link status = Y
Queue start/stop = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
Linux = Y
x86-64 = Y
@@ -37,3 +39,4 @@ port_id = Y
queue = Y
raw_decap = Y
raw_encap = Y
+rss = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 70e6cad195..eaa578e72a 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,7 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_rss flow_rss;
struct flow_action_raw_encap encap;
struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 2e96fa5bed..4a1525f237 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -114,6 +114,8 @@ struct flow_nic_dev {
struct flow_eth_dev *eth_base;
pthread_mutex_t mtx;
+ /* RSS hashing configuration */
+ struct nt_eth_rss_conf rss_conf;
/* next NIC linked list */
struct flow_nic_dev *next;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index a2cb9a68b4..ea27f96865 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1071,6 +1071,12 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+
+ /*
+ * Other
+ */
+ .hw_mod_hsh_rcp_flush = hw_mod_hsh_rcp_flush,
+ .flow_nic_set_hasher_fields = flow_nic_set_hasher_fields,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0cb9451390..afb1c13f57 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -602,6 +602,49 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RSS", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_rss rss_tmp;
+ const struct rte_flow_action_rss *rss =
+ memcpy_mask_if(&rss_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_rss));
+
+ if (rss->key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: RSS hash key length %u exceeds maximum value %u",
+ rss->key_len, MAX_RSS_KEY_LEN);
+ flow_nic_set_error(ERR_RSS_TOO_LONG_KEY, error);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < rss->queue_num; ++i) {
+ int hw_id = rx_queue_idx_to_hw_id(dev, rss->queue[i]);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+ }
+
+ fd->hsh.func = rss->func;
+ fd->hsh.types = rss->types;
+ fd->hsh.key = rss->key;
+ fd->hsh.key_len = rss->key_len;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RSS func: %d, types: 0x%" PRIX64 ", key_len: %d",
+ dev, rss->func, rss->types, rss->key_len);
+
+ fd->full_offload = 0;
+ *num_queues += rss->queue_num;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MARK:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bfca8f28b1..1b25621537 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -214,6 +214,14 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
dev_info->max_rx_pktlen = HW_MAX_PKT_LEN;
dev_info->max_mtu = MAX_MTU;
+ if (p_adapter_info->fpga_info.profile == FPGA_INFO_PROFILE_INLINE) {
+ dev_info->flow_type_rss_offloads = NT_ETH_RSS_OFFLOAD_MASK;
+ dev_info->hash_key_size = MAX_RSS_KEY_LEN;
+
+ dev_info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(TOEPLITZ);
+ }
+
if (internals->p_drv) {
dev_info->max_rx_queues = internals->nb_rx_queues;
dev_info->max_tx_queues = internals->nb_tx_queues;
@@ -1372,6 +1380,73 @@ promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
return 0;
}
+static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+ struct nt_eth_rss_conf tmp_rss_conf = { 0 };
+ const int hsh_idx = 0; /* hsh index 0 means the default recipe in HSH module */
+ int res = 0;
+
+ if (rss_conf->rss_key != NULL) {
+ if (rss_conf->rss_key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, NTNIC,
+ "ERROR: - RSS hash key length %u exceeds maximum value %u",
+ rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
+ return -1;
+ }
+
+ rte_memcpy(&tmp_rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+ }
+
+ tmp_rss_conf.algorithm = rss_conf->algorithm;
+
+ tmp_rss_conf.rss_hf = rss_conf->rss_hf;
+ res = flow_filter_ops->flow_nic_set_hasher_fields(ndev, hsh_idx, tmp_rss_conf);
+
+ if (res == 0) {
+ flow_filter_ops->hw_mod_hsh_rcp_flush(&ndev->be, hsh_idx, 1);
+ rte_memcpy(&ndev->rss_conf, &tmp_rss_conf, sizeof(struct nt_eth_rss_conf));
+
+ } else {
+ NT_LOG(ERR, NTNIC, "ERROR: - RSS hash update failed with error %i", res);
+ }
+
+ return res;
+}
+
+static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+
+ rss_conf->algorithm = (enum rte_eth_hash_function)ndev->rss_conf.algorithm;
+
+ rss_conf->rss_hf = ndev->rss_conf.rss_hf;
+
+ /*
+ * copy full stored key into rss_key and pad it with
+ * zeros up to rss_key_len / MAX_RSS_KEY_LEN
+ */
+ if (rss_conf->rss_key != NULL) {
+ int key_len = rss_conf->rss_key_len < MAX_RSS_KEY_LEN ? rss_conf->rss_key_len
+ : MAX_RSS_KEY_LEN;
+ memset(rss_conf->rss_key, 0, rss_conf->rss_key_len);
+ rte_memcpy(rss_conf->rss_key, &ndev->rss_conf.rss_key, key_len);
+ rss_conf->rss_key_len = key_len;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
@@ -1395,6 +1470,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
+ .rss_hash_update = eth_dev_rss_hash_update,
+ .rss_hash_conf_get = rss_hash_conf_get,
};
/*
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 87b26bd315..4962ab8d5a 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -317,6 +317,79 @@ int create_action_elements_inline(struct cnv_action_s *action,
* Non-compatible actions handled here
*/
switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RSS: {
+ const struct rte_flow_action_rss *rss =
+ (const struct rte_flow_action_rss *)actions[aidx].conf;
+
+ switch (rss->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_DEFAULT;
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_TOEPLITZ;
+
+ if (rte_is_power_of_2(rss->queue_num) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - for Toeplitz the number of queues must be power of two");
+ return -1;
+ }
+
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT:
+ case RTE_ETH_HASH_FUNCTION_MAX:
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported function: %u",
+ rss->func);
+ return -1;
+ }
+
+ uint64_t tmp_rss_types = 0;
+
+ switch (rss->level) {
+ case 1:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_OUTERMOST;
+ break;
+
+ case 2:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_INNERMOST;
+ break;
+
+ case 0:
+ /* keep level mask specified at types */
+ action->flow_rss.types = rss->types;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported level: %u",
+ rss->level);
+ return -1;
+ }
+
+ action->flow_rss.level = 0;
+ action->flow_rss.key_len = rss->key_len;
+ action->flow_rss.queue_num = rss->queue_num;
+ action->flow_rss.key = rss->key;
+ action->flow_rss.queue = rss->queue;
+ action->flow_actions[aidx].conf = &action->flow_rss;
+ }
+ break;
+
case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
const struct rte_flow_action_raw_decap *decap =
(const struct rte_flow_action_raw_decap *)actions[aidx]
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 12baa13800..e40ed9b949 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -316,6 +316,13 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+
+ /*
+ * Other
+ */
+ int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+ int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v1 54/73] net/ntnic: add statistics API
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (52 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 55/73] net/ntnic: add rpf module Serhii Iliushyk
` (22 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add statistics init, setup, get, and reset APIs together with their
implementation.
Add the statistics FPGA defines.
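The setup path maps a DMA buffer that the FPGA requires to be 16 KiB aligned (see the nt_dma_alloc() call with alignment 0x4000 in the diff below); the alignment arithmetic can be sketched as follows, assuming a power-of-two alignment:

```c
#include <assert.h>
#include <stdint.h>

/* Round addr up to the next multiple of align (align must be a power
 * of two). This is the arithmetic behind requesting 0x4000 (16 KiB)
 * alignment for the statistics buffer the FPGA writes counters into. */
static uintptr_t align_up(uintptr_t addr, uintptr_t align)
{
	return (addr + align - 1) & ~(align - 1);
}
```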
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 192 +++++++++
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 149 +++++++
drivers/net/ntnic/include/ntos_drv.h | 9 +
.../ntnic/include/stream_binary_flow_api.h | 5 +
drivers/net/ntnic/meson.build | 3 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 1 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 10 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 370 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 40 ++
drivers/net/ntnic/ntnic_ethdev.c | 119 +++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 132 +++++++
drivers/net/ntnic/ntnic_mod_reg.c | 30 ++
drivers/net/ntnic/ntnic_mod_reg.h | 17 +
drivers/net/ntnic/ntutil/nt_util.h | 1 +
21 files changed, 1119 insertions(+), 12 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_adapter.c b/drivers/net/ntnic/adapter/nt4ga_adapter.c
index d9e6716c30..fa72dfda8d 100644
--- a/drivers/net/ntnic/adapter/nt4ga_adapter.c
+++ b/drivers/net/ntnic/adapter/nt4ga_adapter.c
@@ -212,19 +212,26 @@ static int nt4ga_adapter_init(struct adapter_info_s *p_adapter_info)
}
}
- nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
- if (p_nthw_rmc == NULL) {
- NT_LOG(ERR, NTNIC, "Failed to allocate memory for RMC module");
- return -1;
- }
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
- res = nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
- if (res) {
- NT_LOG(ERR, NTNIC, "Failed to initialize RMC module");
- return -1;
- }
+ if (nt4ga_stat_ops != NULL) {
+ /* Nt4ga Stat init/setup */
+ res = nt4ga_stat_ops->nt4ga_stat_init(p_adapter_info);
+
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot initialize the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+
+ res = nt4ga_stat_ops->nt4ga_stat_setup(p_adapter_info);
- nthw_rmc_unblock(p_nthw_rmc, false);
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot setup the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+ }
return 0;
}
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
new file mode 100644
index 0000000000..0e20f3ea45
--- /dev/null
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -0,0 +1,192 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+#include "nt_util.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "nthw_fpga_param_defs.h"
+#include "nt4ga_adapter.h"
+#include "ntnic_nim.h"
+#include "flow_filter.h"
+#include "ntnic_mod_reg.h"
+
+#define DEFAULT_MAX_BPS_SPEED 100e9
+
+static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
+{
+ const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
+ fpga_info_t *fpga_info = &p_adapter_info->fpga_info;
+ nthw_fpga_t *p_fpga = fpga_info->mp_fpga;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+
+ if (p_nt4ga_stat) {
+ memset(p_nt4ga_stat, 0, sizeof(nt4ga_stat_t));
+
+ } else {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ {
+ nthw_stat_t *p_nthw_stat = nthw_stat_new();
+
+ if (!p_nthw_stat) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ if (nthw_rmc_init(NULL, p_fpga, 0) == 0) {
+ nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
+
+ if (!p_nthw_rmc) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
+ p_nt4ga_stat->mp_nthw_rmc = p_nthw_rmc;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rmc = NULL;
+ }
+
+ p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
+ nthw_stat_init(p_nthw_stat, p_fpga, 0);
+
+ p_nt4ga_stat->mn_rx_host_buffers = p_nthw_stat->m_nb_rx_host_buffers;
+ p_nt4ga_stat->mn_tx_host_buffers = p_nthw_stat->m_nb_tx_host_buffers;
+
+ p_nt4ga_stat->mn_rx_ports = p_nthw_stat->m_nb_rx_ports;
+ p_nt4ga_stat->mn_tx_ports = p_nthw_stat->m_nb_tx_ports;
+ }
+
+ return 0;
+}
+
+static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
+{
+ const int n_physical_adapter_no = p_adapter_info->adapter_no;
+ (void)n_physical_adapter_no;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+
+ /* Allocate and map memory for fpga statistics */
+ {
+ uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
+ sizeof(p_nthw_stat->mp_timestamp));
+ struct nt_dma_s *p_dma;
+ int numa_node = p_adapter_info->fpga_info.numa_node;
+
+ /* FPGA needs a 16K alignment on Statistics */
+ p_dma = nt_dma_alloc(n_stat_size, 0x4000, numa_node);
+
+ if (!p_dma) {
+ NT_LOG_DBGX(ERR, NTNIC, "p_dma alloc failed");
+ return -1;
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%x @%d %" PRIx64 " %" PRIx64, n_stat_size, numa_node,
+ p_dma->addr, p_dma->iova);
+
+ NT_LOG(DBG, NTNIC,
+ "DMA: Physical adapter %02d, PA = 0x%016" PRIX64 " DMA = 0x%016" PRIX64
+ " size = 0x%" PRIX32 "",
+ n_physical_adapter_no, p_dma->iova, p_dma->addr, n_stat_size);
+
+ p_nt4ga_stat->p_stat_dma_virtual = (uint32_t *)p_dma->addr;
+ p_nt4ga_stat->n_stat_size = n_stat_size;
+ p_nt4ga_stat->p_stat_dma = p_dma;
+
+ memset(p_nt4ga_stat->p_stat_dma_virtual, 0xaa, n_stat_size);
+ nthw_stat_set_dma_address(p_nthw_stat, p_dma->iova,
+ p_nt4ga_stat->p_stat_dma_virtual);
+ }
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+
+ p_nt4ga_stat->mp_stat_structs_color =
+ calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_color) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_hb =
+ calloc(p_nt4ga_stat->mn_rx_host_buffers + p_nt4ga_stat->mn_tx_host_buffers,
+ sizeof(struct host_buffer_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_hb) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_rx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_tx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_port_load =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
+
+ if (!p_nt4ga_stat->mp_port_load) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+#ifdef NIM_TRIGGER
+ uint64_t max_bps_speed = nt_get_max_link_speed(p_adapter_info->nt4ga_link.speed_capa);
+
+ if (max_bps_speed == 0)
+ max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+
+#else
+ uint64_t max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+ NT_LOG(ERR, NTNIC, "NIM module not included");
+#endif
+
+ for (int p = 0; p < NUM_ADAPTER_PORTS_MAX; p++) {
+ p_nt4ga_stat->mp_port_load[p].rx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].tx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].rx_pps_max = max_bps_speed / (8 * (20 + 64));
+ p_nt4ga_stat->mp_port_load[p].tx_pps_max = max_bps_speed / (8 * (20 + 64));
+ }
+
+ memset(p_nt4ga_stat->a_stat_structs_color_base, 0,
+ sizeof(struct color_counters) * NT_MAX_COLOR_FLOW_STATS);
+ p_nt4ga_stat->last_timestamp = 0;
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ return 0;
+}
+
+static struct nt4ga_stat_ops ops = {
+ .nt4ga_stat_init = nt4ga_stat_init,
+ .nt4ga_stat_setup = nt4ga_stat_setup,
+};
+
+void nt4ga_stat_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "Stat module was initialized");
+ register_nt4ga_stat_ops(&ops);
+}
diff --git a/drivers/net/ntnic/include/common_adapter_defs.h b/drivers/net/ntnic/include/common_adapter_defs.h
new file mode 100644
index 0000000000..6ed9121f0f
--- /dev/null
+++ b/drivers/net/ntnic/include/common_adapter_defs.h
@@ -0,0 +1,15 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _COMMON_ADAPTER_DEFS_H_
+#define _COMMON_ADAPTER_DEFS_H_
+
+/*
+ * Declarations shared by NT adapter types.
+ */
+#define NUM_ADAPTER_MAX (8)
+#define NUM_ADAPTER_PORTS_MAX (128)
+
+#endif /* _COMMON_ADAPTER_DEFS_H_ */
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index eaa578e72a..1456977837 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -46,6 +46,10 @@ struct rte_flow {
uint32_t flow_stat_id;
+ uint64_t stat_pkts;
+ uint64_t stat_bytes;
+ uint8_t stat_tcp_flags;
+
uint16_t caller_id;
};
diff --git a/drivers/net/ntnic/include/nt4ga_adapter.h b/drivers/net/ntnic/include/nt4ga_adapter.h
index 809135f130..fef79ce358 100644
--- a/drivers/net/ntnic/include/nt4ga_adapter.h
+++ b/drivers/net/ntnic/include/nt4ga_adapter.h
@@ -6,6 +6,7 @@
#ifndef _NT4GA_ADAPTER_H_
#define _NT4GA_ADAPTER_H_
+#include "ntnic_stat.h"
#include "nt4ga_link.h"
typedef struct hw_info_s {
@@ -30,6 +31,7 @@ typedef struct hw_info_s {
#include "ntnic_stat.h"
typedef struct adapter_info_s {
+ struct nt4ga_stat_s nt4ga_stat;
struct nt4ga_filter_s nt4ga_filter;
struct nt4ga_link_s nt4ga_link;
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8ebdd98db0..1135e9a539 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -15,6 +15,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
+ pthread_mutex_t stat_lck;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 148088fe1d..2aee3f8425 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -6,6 +6,155 @@
#ifndef NTNIC_STAT_H_
#define NTNIC_STAT_H_
+#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_fpga_model.h"
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+struct nthw_stat {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_stat;
+ int mn_instance;
+
+ int mn_stat_layout_version;
+
+ bool mb_has_tx_stats;
+
+ int m_nb_phy_ports;
+ int m_nb_nim_ports;
+
+ int m_nb_rx_ports;
+ int m_nb_tx_ports;
+
+ int m_nb_rx_host_buffers;
+ int m_nb_tx_host_buffers;
+
+ int m_dbs_present;
+
+ int m_rx_port_replicate;
+
+ int m_nb_color_counters;
+
+ int m_nb_rx_hb_counters;
+ int m_nb_tx_hb_counters;
+
+ int m_nb_rx_port_counters;
+ int m_nb_tx_port_counters;
+
+ int m_nb_counters;
+
+ int m_nb_rpp_per_ps;
+
+ nthw_field_t *mp_fld_dma_ena;
+ nthw_field_t *mp_fld_cnt_clear;
+
+ nthw_field_t *mp_fld_tx_disable;
+
+ nthw_field_t *mp_fld_cnt_freeze;
+
+ nthw_field_t *mp_fld_stat_toggle_missed;
+
+ nthw_field_t *mp_fld_dma_lsb;
+ nthw_field_t *mp_fld_dma_msb;
+
+ nthw_field_t *mp_fld_load_bin;
+ nthw_field_t *mp_fld_load_bps_rx0;
+ nthw_field_t *mp_fld_load_bps_rx1;
+ nthw_field_t *mp_fld_load_bps_tx0;
+ nthw_field_t *mp_fld_load_bps_tx1;
+ nthw_field_t *mp_fld_load_pps_rx0;
+ nthw_field_t *mp_fld_load_pps_rx1;
+ nthw_field_t *mp_fld_load_pps_tx0;
+ nthw_field_t *mp_fld_load_pps_tx1;
+
+ uint64_t m_stat_dma_physical;
+ uint32_t *mp_stat_dma_virtual;
+
+ uint64_t *mp_timestamp;
+};
+
+typedef struct nthw_stat nthw_stat_t;
+typedef struct nthw_stat nthw_stat;
+
+struct color_counters {
+ uint64_t color_packets;
+ uint64_t color_bytes;
+ uint8_t tcp_flags;
+};
+
+struct host_buffer_counters {
+};
+
+struct port_load_counters {
+ uint64_t rx_pps_max;
+ uint64_t tx_pps_max;
+ uint64_t rx_bps_max;
+ uint64_t tx_bps_max;
+};
+
+struct port_counters_v2 {
+};
+
+struct flm_counters_v1 {
+};
+
+struct nt4ga_stat_s {
+ nthw_stat_t *mp_nthw_stat;
+ nthw_rmc_t *mp_nthw_rmc;
+ struct nt_dma_s *p_stat_dma;
+ uint32_t *p_stat_dma_virtual;
+ uint32_t n_stat_size;
+
+ uint64_t last_timestamp;
+
+ int mn_rx_host_buffers;
+ int mn_tx_host_buffers;
+
+ int mn_rx_ports;
+ int mn_tx_ports;
+
+ struct color_counters *mp_stat_structs_color;
+ /* For calculating increments between stats polls */
+ struct color_counters a_stat_structs_color_base[NT_MAX_COLOR_FLOW_STATS];
+
+ /* Port counters for inline */
+ struct {
+ struct port_counters_v2 *mp_stat_structs_port_rx;
+ struct port_counters_v2 *mp_stat_structs_port_tx;
+ } cap;
+
+ struct host_buffer_counters *mp_stat_structs_hb;
+ struct port_load_counters *mp_port_load;
+
+ /* Rx/Tx totals: */
+ uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
+
+ uint64_t a_port_rx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ /* Base is for calculating increments between statistics reads */
+ uint64_t a_port_rx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_packets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_packets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_drops_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_drops_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+};
+
+typedef struct nt4ga_stat_s nt4ga_stat_t;
+
+nthw_stat_t *nthw_stat_new(void);
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_stat_delete(nthw_stat_t *p);
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual);
+int nthw_stat_trigger(nthw_stat_t *p);
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 8fd577dfe3..7b3c8ff3d6 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -57,6 +57,9 @@ struct __rte_cache_aligned ntnic_rx_queue {
struct flow_queue_id_s queue; /* queue info - user id and hw queue index */
struct rte_mempool *mb_pool; /* mbuf memory pool */
uint16_t buf_size; /* Size of data area in mbuf */
+ unsigned long rx_pkts; /* Rx packet statistics */
+ unsigned long rx_bytes; /* Rx bytes statistics */
+ unsigned long err_pkts; /* Rx error packet statistics */
int enabled; /* Enabling/disabling of this queue */
struct hwq_s hwq;
@@ -80,6 +83,9 @@ struct __rte_cache_aligned ntnic_tx_queue {
int rss_target_id;
uint32_t port; /* Tx port for this queue */
+ unsigned long tx_pkts; /* Tx packet statistics */
+ unsigned long tx_bytes; /* Tx bytes statistics */
+ unsigned long err_pkts; /* Tx error packet stat */
int enabled; /* Enabling/disabling of this queue */
enum fpga_info_profile profile; /* Inline / Capture */
};
@@ -95,6 +101,7 @@ struct pmd_internals {
/* Offset of the VF from the PF */
uint8_t vf_offset;
uint32_t port;
+ uint32_t port_id;
nt_meta_port_type_t type;
struct flow_queue_id_s vpq[MAX_QUEUES];
unsigned int vpq_nb_vq;
@@ -107,6 +114,8 @@ struct pmd_internals {
struct rte_ether_addr eth_addrs[NUM_MAC_ADDRS_PER_PORT];
/* Multicast ethernet (MAC) addresses. */
struct rte_ether_addr mc_addrs[NUM_MULTICAST_ADDRS_PER_PORT];
+ uint64_t last_stat_rtc;
+ uint64_t rx_missed;
struct pmd_internals *next;
};
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index e5fe686d99..4ce1561033 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,7 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include <rte_ether.h>
#include "rte_flow.h"
#include "rte_flow_driver.h"
@@ -44,6 +45,10 @@
#define FLOW_MAX_QUEUES 128
#define RAW_ENCAP_DECAP_ELEMS_MAX 16
+
+extern uint64_t rte_tsc_freq;
+extern rte_spinlock_t hwlock;
+
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 92167d24e4..216341bb11 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -25,10 +25,12 @@ includes = [
# all sources
sources = files(
'adapter/nt4ga_adapter.c',
+ 'adapter/nt4ga_stat/nt4ga_stat.c',
'dbsconfig/ntnic_dbsconfig.c',
'link_mgmt/link_100g/nt4ga_link_100g.c',
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
+ 'ntnic_filter/ntnic_filter.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
@@ -48,6 +50,7 @@ sources = files(
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
+ 'nthw/stat/nthw_stat.c',
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index 2345820bdc..b239752674 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -44,6 +44,7 @@ typedef struct nthw_rmc nthw_rmc;
nthw_rmc_t *nthw_rmc_new(void);
int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 4a01424c24..748519aeb4 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,16 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+void nthw_rmc_block(nthw_rmc_t *p)
+{
+ /* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
+ if (!p->mb_administrative_block) {
+ nthw_field_set_flush(p->mp_fld_ctrl_block_stat_drop);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_keep_alive);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_mac_port);
+ }
+}
+
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary)
{
uint32_t n_block_mask = ~0U << (b_is_secondary ? p->mn_nims : p->mn_ports);
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
new file mode 100644
index 0000000000..6adcd2e090
--- /dev/null
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -0,0 +1,370 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "nt_util.h"
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "ntnic_stat.h"
+
+#include <malloc.h>
+
+nthw_stat_t *nthw_stat_new(void)
+{
+ nthw_stat_t *p = malloc(sizeof(nthw_stat_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_stat_t));
+
+ return p;
+}
+
+void nthw_stat_delete(nthw_stat_t *p)
+{
+ if (p)
+ free(p);
+}
+
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ uint64_t n_module_version_packed64 = -1;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_STA, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: STAT %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_stat = mod;
+
+ n_module_version_packed64 = nthw_module_get_version_packed64(p->mp_mod_stat);
+ NT_LOG(DBG, NTHW, "%s: STAT %d: version=0x%08lX", p_adapter_id_str, p->mn_instance,
+ n_module_version_packed64);
+
+ {
+ nthw_register_t *p_reg;
+ /* STA_CFG register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_CFG);
+ p->mp_fld_dma_ena = nthw_register_get_field(p_reg, STA_CFG_DMA_ENA);
+ p->mp_fld_cnt_clear = nthw_register_get_field(p_reg, STA_CFG_CNT_CLEAR);
+
+ /* CFG: fields NOT available from v. 3 */
+ p->mp_fld_tx_disable = nthw_register_query_field(p_reg, STA_CFG_TX_DISABLE);
+ p->mp_fld_cnt_freeze = nthw_register_query_field(p_reg, STA_CFG_CNT_FRZ);
+
+ /* STA_STATUS register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_STATUS);
+ p->mp_fld_stat_toggle_missed =
+ nthw_register_get_field(p_reg, STA_STATUS_STAT_TOGGLE_MISSED);
+
+ /* HOST_ADR registers */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_LSB);
+ p->mp_fld_dma_lsb = nthw_register_get_field(p_reg, STA_HOST_ADR_LSB_LSB);
+
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_MSB);
+ p->mp_fld_dma_msb = nthw_register_get_field(p_reg, STA_HOST_ADR_MSB_MSB);
+
+ /* Binning cycles */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BIN);
+
+ if (p_reg) {
+ p->mp_fld_load_bin = nthw_register_get_field(p_reg, STA_LOAD_BIN_BIN);
+
+ /* Bandwidth load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx0 = NULL;
+ }
+
+ /* Bandwidth load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx1 = NULL;
+ }
+
+ /* Bandwidth load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx0 = NULL;
+ }
+
+ /* Bandwidth load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx1 = NULL;
+ }
+
+ /* Packet load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx0 = NULL;
+ }
+
+ /* Packet load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx1 = NULL;
+ }
+
+ /* Packet load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx0 = NULL;
+ }
+
+ /* Packet load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+
+ } else {
+ p->mp_fld_load_bin = NULL;
+ p->mp_fld_load_bps_rx0 = NULL;
+ p->mp_fld_load_bps_rx1 = NULL;
+ p->mp_fld_load_bps_tx0 = NULL;
+ p->mp_fld_load_bps_tx1 = NULL;
+ p->mp_fld_load_pps_rx0 = NULL;
+ p->mp_fld_load_pps_rx1 = NULL;
+ p->mp_fld_load_pps_tx0 = NULL;
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+ }
+
+ /* Params */
+ p->m_nb_nim_ports = nthw_fpga_get_product_param(p_fpga, NT_NIMS, 0);
+ p->m_nb_phy_ports = nthw_fpga_get_product_param(p_fpga, NT_PHY_PORTS, 0);
+
+ /* VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_STA_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_PORTS, 0);
+ }
+ }
+
+ p->m_nb_rpp_per_ps = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+
+ p->m_nb_tx_ports = nthw_fpga_get_product_param(p_fpga, NT_TX_PORTS, 0);
+ p->m_rx_port_replicate = nthw_fpga_get_product_param(p_fpga, NT_RX_PORT_REPLICATE, 0);
+
+ /* VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_STA_COLORS, 64) * 2;
+
+ if (p->m_nb_color_counters == 0) {
+ /* non-VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_CAT_FUNCS, 0) * 2;
+ }
+
+ p->m_nb_rx_host_buffers = nthw_fpga_get_product_param(p_fpga, NT_QUEUES, 0);
+ p->m_nb_tx_host_buffers = p->m_nb_rx_host_buffers;
+
+ p->m_dbs_present = nthw_fpga_get_product_param(p_fpga, NT_DBS_PRESENT, 0);
+
+ p->m_nb_rx_hb_counters = (p->m_nb_rx_host_buffers * (6 + 2 *
+ (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ?
+ p->m_dbs_present : 0)));
+
+ p->m_nb_tx_hb_counters = 0;
+
+ p->m_nb_rx_port_counters = 42 +
+ 2 * (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ? p->m_dbs_present : 0);
+ p->m_nb_tx_port_counters = 0;
+
+ p->m_nb_counters =
+ p->m_nb_color_counters + p->m_nb_rx_hb_counters + p->m_nb_tx_hb_counters;
+
+ p->mn_stat_layout_version = 0;
+
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 9)) {
+ p->mn_stat_layout_version = 7;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 8)) {
+ p->mn_stat_layout_version = 6;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->mn_stat_layout_version = 5;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 4)) {
+ p->mn_stat_layout_version = 4;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 3)) {
+ p->mn_stat_layout_version = 3;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 2)) {
+ p->mn_stat_layout_version = 2;
+
+ } else if (n_module_version_packed64 > VERSION_PACKED64(0, 0)) {
+ p->mn_stat_layout_version = 1;
+
+ } else {
+ p->mn_stat_layout_version = 0;
+ NT_LOG(ERR, NTHW, "%s: unknown module_version 0x%08lX layout=%d",
+ p_adapter_id_str, n_module_version_packed64, p->mn_stat_layout_version);
+ }
+
+ assert(p->mn_stat_layout_version);
+
+ /* STA module 0.2+ adds IPF counters per port (Rx feature) */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 2))
+ p->m_nb_rx_port_counters += 6;
+
+ /* STA module 0.3+ adds TX stats */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3) || p->m_nb_tx_ports >= 1)
+ p->mb_has_tx_stats = true;
+
+ /* STA module 0.3+ adds TX stat counters */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3))
+ p->m_nb_tx_port_counters += 22;
+
+ /* STA module 0.4+ adds TX drop event counter */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 4))
+ p->m_nb_tx_port_counters += 1; /* TX drop event counter */
+
+ /*
+ * STA module 0.6+ adds pkt filter drop octets+pkts, retransmit and
+ * duplicate counters
+ */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->m_nb_rx_port_counters += 4;
+ p->m_nb_tx_port_counters += 1;
+ }
+
+ p->m_nb_counters += (p->m_nb_rx_ports * p->m_nb_rx_port_counters);
+
+ if (p->mb_has_tx_stats)
+ p->m_nb_counters += (p->m_nb_tx_ports * p->m_nb_tx_port_counters);
+
+ /* Output params (debug) */
+ NT_LOG(DBG, NTHW, "%s: nims=%d rxports=%d txports=%d rxrepl=%d colors=%d queues=%d",
+ p_adapter_id_str, p->m_nb_nim_ports, p->m_nb_rx_ports, p->m_nb_tx_ports,
+ p->m_rx_port_replicate, p->m_nb_color_counters, p->m_nb_rx_host_buffers);
+ NT_LOG(DBG, NTHW, "%s: hbs=%d hbcounters=%d rxcounters=%d txcounters=%d",
+ p_adapter_id_str, p->m_nb_rx_host_buffers, p->m_nb_rx_hb_counters,
+ p->m_nb_rx_port_counters, p->m_nb_tx_port_counters);
+ NT_LOG(DBG, NTHW, "%s: layout=%d", p_adapter_id_str, p->mn_stat_layout_version);
+ NT_LOG(DBG, NTHW, "%s: counters=%d (0x%X)", p_adapter_id_str, p->m_nb_counters,
+ p->m_nb_counters);
+
+ /* Init */
+ if (p->mp_fld_tx_disable)
+ nthw_field_set_flush(p->mp_fld_tx_disable);
+
+ nthw_field_update_register(p->mp_fld_cnt_clear);
+ nthw_field_set_flush(p->mp_fld_cnt_clear);
+ nthw_field_clr_flush(p->mp_fld_cnt_clear);
+
+ nthw_field_update_register(p->mp_fld_stat_toggle_missed);
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_clr_flush(p->mp_fld_dma_ena);
+ nthw_field_update_register(p->mp_fld_dma_ena);
+
+ /* Set the sliding windows size for port load */
+ if (p->mp_fld_load_bin) {
+ uint32_t rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ uint32_t bin =
+ (uint32_t)(((PORT_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) -
+ 1ULL);
+ nthw_field_set_val_flush32(p->mp_fld_load_bin, bin);
+ }
+
+ return 0;
+}
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual)
+{
+ assert(p_stat_dma_virtual);
+ p->mp_timestamp = NULL;
+
+ p->m_stat_dma_physical = stat_dma_physical;
+ p->mp_stat_dma_virtual = p_stat_dma_virtual;
+
+ memset(p->mp_stat_dma_virtual, 0, (p->m_nb_counters * sizeof(uint32_t)));
+
+ nthw_field_set_val_flush32(p->mp_fld_dma_msb,
+ (uint32_t)((p->m_stat_dma_physical >> 32) & 0xffffffff));
+ nthw_field_set_val_flush32(p->mp_fld_dma_lsb,
+ (uint32_t)(p->m_stat_dma_physical & 0xffffffff));
+
+ p->mp_timestamp = (uint64_t *)(p->mp_stat_dma_virtual + p->m_nb_counters);
+ NT_LOG(DBG, NTHW,
+ "stat_dma_physical=%" PRIX64 " p_stat_dma_virtual=%" PRIX64
+ " mp_timestamp=%" PRIX64 "", p->m_stat_dma_physical,
+ (uint64_t)p->mp_stat_dma_virtual, (uint64_t)p->mp_timestamp);
+ *p->mp_timestamp = (uint64_t)(int64_t)-1;
+ return 0;
+}
+
+int nthw_stat_trigger(nthw_stat_t *p)
+{
+ int n_toggle_miss = nthw_field_get_updated(p->mp_fld_stat_toggle_missed);
+
+ if (n_toggle_miss)
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ if (p->mp_timestamp)
+ *p->mp_timestamp = -1; /* Clear old ts */
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_set_flush(p->mp_fld_dma_ena);
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 2b059d98ff..ddc144dc02 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -46,6 +46,7 @@
#define MOD_SDC (0xd2369530UL)
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
+#define MOD_STA (0x76fae64dUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7741aa563f..8f196f885f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -45,6 +45,7 @@
#include "nthw_fpga_reg_defs_sdc.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
+#include "nthw_fpga_reg_defs_sta.h"
#include "nthw_fpga_reg_defs_tx_cpy.h"
#include "nthw_fpga_reg_defs_tx_ins.h"
#include "nthw_fpga_reg_defs_tx_rpl.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
new file mode 100644
index 0000000000..640ffcbc52
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -0,0 +1,40 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_STA_
+#define _NTHW_FPGA_REG_DEFS_STA_
+
+/* STA */
+#define STA_CFG (0xcecaf9f4UL)
+#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
+#define STA_CFG_CNT_FRZ (0x8c27a596UL)
+#define STA_CFG_DMA_ENA (0x940dbacUL)
+#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_HOST_ADR_LSB (0xde569336UL)
+#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
+#define STA_HOST_ADR_MSB (0xdf94f901UL)
+#define STA_HOST_ADR_MSB_MSB (0x114798c8UL)
+#define STA_LOAD_BIN (0x2e842591UL)
+#define STA_LOAD_BIN_BIN (0x1a2b942eUL)
+#define STA_LOAD_BPS_RX_0 (0xbf8f4595UL)
+#define STA_LOAD_BPS_RX_0_BPS (0x41647781UL)
+#define STA_LOAD_BPS_RX_1 (0xc8887503UL)
+#define STA_LOAD_BPS_RX_1_BPS (0x7c045e31UL)
+#define STA_LOAD_BPS_TX_0 (0x9ae41a49UL)
+#define STA_LOAD_BPS_TX_0_BPS (0x870b7e06UL)
+#define STA_LOAD_BPS_TX_1 (0xede32adfUL)
+#define STA_LOAD_BPS_TX_1_BPS (0xba6b57b6UL)
+#define STA_LOAD_PPS_RX_0 (0x811173c3UL)
+#define STA_LOAD_PPS_RX_0_PPS (0xbee573fcUL)
+#define STA_LOAD_PPS_RX_1 (0xf6164355UL)
+#define STA_LOAD_PPS_RX_1_PPS (0x83855a4cUL)
+#define STA_LOAD_PPS_TX_0 (0xa47a2c1fUL)
+#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
+#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
+#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_STATUS (0x91c5c51cUL)
+#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_STA_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 1b25621537..86876ecda6 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -65,6 +65,8 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define MAX_RX_PACKETS 128
#define MAX_TX_PACKETS 128
+uint64_t rte_tsc_freq;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -88,7 +90,7 @@ static const struct rte_pci_id nthw_pci_id_map[] = {
static const struct sg_ops_s *sg_ops;
-static rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
+rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
/*
* Store and get adapter info
@@ -156,6 +158,102 @@ get_pdrv_from_pci(struct rte_pci_addr addr)
return p_drv;
}
+static int dpdk_stats_collect(struct pmd_internals *internals, struct rte_eth_stats *stats)
+{
+ const struct ntnic_filter_ops *ntnic_filter_ops = get_ntnic_filter_ops();
+
+ if (ntnic_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "ntnic_filter_ops uninitialized");
+ return -1;
+ }
+
+ unsigned int i;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const int if_index = internals->n_intf_no;
+ uint64_t rx_total = 0;
+ uint64_t rx_total_b = 0;
+ uint64_t tx_total = 0;
+ uint64_t tx_total_b = 0;
+ uint64_t tx_err_total = 0;
+
+ if (!p_nthw_stat || !p_nt4ga_stat || !stats || if_index < 0 ||
+ if_index > NUM_ADAPTER_PORTS_MAX) {
+ NT_LOG_DBGX(WRN, NTNIC, "error exit");
+ return -1;
+ }
+
+ /*
+ * Pull the latest port statistic numbers (Rx/Tx pkts and bytes)
+ * Return values are in the "internals->rxq_scg[]" and "internals->txq_scg[]" arrays
+ */
+ ntnic_filter_ops->poll_statistics(internals);
+
+ memset(stats, 0, sizeof(*stats));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_rx_queues; i++) {
+ stats->q_ipackets[i] = internals->rxq_scg[i].rx_pkts;
+ stats->q_ibytes[i] = internals->rxq_scg[i].rx_bytes;
+ rx_total += stats->q_ipackets[i];
+ rx_total_b += stats->q_ibytes[i];
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_tx_queues; i++) {
+ stats->q_opackets[i] = internals->txq_scg[i].tx_pkts;
+ stats->q_obytes[i] = internals->txq_scg[i].tx_bytes;
+ stats->q_errors[i] = internals->txq_scg[i].err_pkts;
+ tx_total += stats->q_opackets[i];
+ tx_total_b += stats->q_obytes[i];
+ tx_err_total += stats->q_errors[i];
+ }
+
+ stats->imissed = internals->rx_missed;
+ stats->ipackets = rx_total;
+ stats->ibytes = rx_total_b;
+ stats->opackets = tx_total;
+ stats->obytes = tx_total_b;
+ stats->oerrors = tx_err_total;
+
+ return 0;
+}
+
+static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s *p_nt_drv,
+ int n_intf_no)
+{
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ unsigned int i;
+
+ if (!p_nthw_stat || !p_nt4ga_stat || n_intf_no < 0 || n_intf_no > NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /* Rx */
+ for (i = 0; i < internals->nb_rx_queues; i++) {
+ internals->rxq_scg[i].rx_pkts = 0;
+ internals->rxq_scg[i].rx_bytes = 0;
+ internals->rxq_scg[i].err_pkts = 0;
+ }
+
+ internals->rx_missed = 0;
+
+ /* Tx */
+ for (i = 0; i < internals->nb_tx_queues; i++) {
+ internals->txq_scg[i].tx_pkts = 0;
+ internals->txq_scg[i].tx_bytes = 0;
+ internals->txq_scg[i].err_pkts = 0;
+ }
+
+ p_nt4ga_stat->n_totals_reset_timestamp = time(NULL);
+
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
static int
eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
{
@@ -194,6 +292,23 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
return 0;
}
+static int eth_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ dpdk_stats_collect(internals, stats);
+ return 0;
+}
+
+static int eth_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ const int if_index = internals->n_intf_no;
+ dpdk_stats_reset(internals, p_nt_drv, if_index);
+ return 0;
+}
+
static int
eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info)
{
@@ -1455,6 +1570,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_set_link_down = eth_dev_set_link_down,
.dev_close = eth_dev_close,
.link_update = eth_link_update,
+ .stats_get = eth_stats_get,
+ .stats_reset = eth_stats_reset,
.dev_infos_get = eth_dev_infos_get,
.fw_version_get = eth_fw_version_get,
.rx_queue_setup = eth_rx_scg_queue_setup,
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 4962ab8d5a..e2fce02afa 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -8,11 +8,19 @@
#include "create_elements.h"
#include "ntnic_mod_reg.h"
#include "ntos_system.h"
+#include "ntos_drv.h"
#define MAX_RTE_FLOWS 8192
+#define MAX_COLOR_FLOW_STATS 0x400
#define NT_MAX_COLOR_FLOW_STATS 0x400
+#if (MAX_COLOR_FLOW_STATS != NT_MAX_COLOR_FLOW_STATS)
+#error Difference in COLOR_FLOW_STATS. Please synchronize the defines.
+#endif
+
+static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
@@ -668,6 +676,9 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
/* Cleanup recorded flows */
nt_flows[flow].used = 0;
nt_flows[flow].caller_id = 0;
+ nt_flows[flow].stat_bytes = 0UL;
+ nt_flows[flow].stat_pkts = 0UL;
+ nt_flows[flow].stat_tcp_flags = 0;
}
}
@@ -707,6 +718,127 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int poll_statistics(struct pmd_internals *internals)
+{
+ int flow;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const int if_index = internals->n_intf_no;
+ static uint64_t last_stat_rtc = 0;
+
+ if (!p_nt4ga_stat || if_index < 0 || if_index > NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ assert(rte_tsc_freq > 0);
+
+ rte_spinlock_lock(&hwlock);
+
+ uint64_t now_rtc = rte_get_tsc_cycles();
+
+ /*
+ * Check per port at most once a second:
+ * if more than a second has passed since the last stat read, do a new one
+ */
+ if ((now_rtc - internals->last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ return 0;
+ }
+
+ internals->last_stat_rtc = now_rtc;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /*
+ * Add the RX statistics increments since last time we polled.
+ * (No difference if physical or virtual port)
+ */
+ internals->rxq_scg[0].rx_pkts += p_nt4ga_stat->a_port_rx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_packets_base[if_index];
+ internals->rxq_scg[0].rx_bytes += p_nt4ga_stat->a_port_rx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_octets_base[if_index];
+ internals->rxq_scg[0].err_pkts += 0;
+ internals->rx_missed += p_nt4ga_stat->a_port_rx_drops_total[if_index] -
+ p_nt4ga_stat->a_port_rx_drops_base[if_index];
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_rx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_packets_total[if_index];
+ p_nt4ga_stat->a_port_rx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_octets_total[if_index];
+ p_nt4ga_stat->a_port_rx_drops_base[if_index] =
+ p_nt4ga_stat->a_port_rx_drops_total[if_index];
+
+ /* Tx (here we must distinguish between physical and virtual ports) */
+ if (internals->type == PORT_TYPE_PHYSICAL) {
+ /* Add the statistics increments since last time we polled */
+ internals->txq_scg[0].tx_pkts += p_nt4ga_stat->a_port_tx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_packets_base[if_index];
+ internals->txq_scg[0].tx_bytes += p_nt4ga_stat->a_port_tx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_octets_base[if_index];
+ internals->txq_scg[0].err_pkts += 0;
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_tx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_packets_total[if_index];
+ p_nt4ga_stat->a_port_tx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_octets_total[if_index];
+ }
+
+ /* Globally only once a second */
+ if ((now_rtc - last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return 0;
+ }
+
+ last_stat_rtc = now_rtc;
+
+ /* All color counters are global, therefore only one PMD must update them */
+ const struct color_counters *p_color_counters = p_nt4ga_stat->mp_stat_structs_color;
+ struct color_counters *p_color_counters_base = p_nt4ga_stat->a_stat_structs_color_base;
+ uint64_t color_packets_accumulated, color_bytes_accumulated;
+
+ for (flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used) {
+ unsigned int color = nt_flows[flow].flow_stat_id;
+
+ if (color < NT_MAX_COLOR_FLOW_STATS) {
+ color_packets_accumulated = p_color_counters[color].color_packets;
+ nt_flows[flow].stat_pkts +=
+ (color_packets_accumulated -
+ p_color_counters_base[color].color_packets);
+
+ nt_flows[flow].stat_tcp_flags |= p_color_counters[color].tcp_flags;
+
+ color_bytes_accumulated = p_color_counters[color].color_bytes;
+ nt_flows[flow].stat_bytes +=
+ (color_bytes_accumulated -
+ p_color_counters_base[color].color_bytes);
+
+ /* Update the counter bases */
+ p_color_counters_base[color].color_packets =
+ color_packets_accumulated;
+ p_color_counters_base[color].color_bytes = color_bytes_accumulated;
+ }
+ }
+ }
+
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
+static const struct ntnic_filter_ops ntnic_filter_ops = {
+ .poll_statistics = poll_statistics,
+};
+
+void ntnic_filter_init(void)
+{
+ register_ntnic_filter_ops(&ntnic_filter_ops);
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 593b56bf5b..355e2032b1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,21 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+static const struct ntnic_filter_ops *ntnic_filter_ops;
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
+{
+ ntnic_filter_ops = ops;
+}
+
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void)
+{
+ if (ntnic_filter_ops == NULL)
+ ntnic_filter_init();
+
+ return ntnic_filter_ops;
+}
+
static struct link_ops_s *link_100g_ops;
void register_100g_link_ops(struct link_ops_s *ops)
@@ -47,6 +62,21 @@ const struct port_ops *get_port_ops(void)
return port_ops;
}
+static const struct nt4ga_stat_ops *nt4ga_stat_ops;
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops)
+{
+ nt4ga_stat_ops = ops;
+}
+
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void)
+{
+ if (nt4ga_stat_ops == NULL)
+ nt4ga_stat_ops_init();
+
+ return nt4ga_stat_ops;
+}
+
static const struct adapter_ops *adapter_ops;
void register_adapter_ops(const struct adapter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index e40ed9b949..30b9afb7d3 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -111,6 +111,14 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+struct ntnic_filter_ops {
+ int (*poll_statistics)(struct pmd_internals *internals);
+};
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops);
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void);
+void ntnic_filter_init(void);
+
struct link_ops_s {
int (*link_init)(struct adapter_info_s *p_adapter_info, nthw_fpga_t *p_fpga);
};
@@ -175,6 +183,15 @@ void register_port_ops(const struct port_ops *ops);
const struct port_ops *get_port_ops(void);
void port_init(void);
+struct nt4ga_stat_ops {
+ int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+};
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void);
+void nt4ga_stat_ops_init(void);
+
struct adapter_ops {
int (*init)(struct adapter_info_s *p_adapter_info);
int (*deinit)(struct adapter_info_s *p_adapter_info);
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index a482fb43ad..f2eccf3501 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -22,6 +22,7 @@
* The window size must be at most 3 min in order to
* prevent overflow.
*/
+#define PORT_LOAD_WINDOWS_SIZE 2ULL
#define FLM_LOAD_WINDOWS_SIZE 2ULL
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
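The statistics poll in the patch above repeatedly accumulates the increment since the last read (`total - base`) and then advances the base to the current total. A minimal standalone sketch of that total/base pattern (the struct and function names are illustrative, not the driver's API):

```c
#include <stdint.h>

/* Hypothetical counter pair mirroring the total/base scheme used above. */
struct counter {
	uint64_t total; /* monotonically increasing HW counter */
	uint64_t base;  /* value of 'total' at the previous poll */
};

/* Add the increment since the last poll to 'accum' and advance the base. */
static inline void accumulate_delta(struct counter *c, uint64_t *accum)
{
	*accum += c->total - c->base;
	c->base = c->total;
}
```

Because the base is advanced on every call, polling twice while the hardware counter has not moved adds nothing, which is what makes the repeated per-queue and per-color updates above idempotent between hardware refreshes.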
* [PATCH v1 55/73] net/ntnic: add rpf module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (53 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 54/73] net/ntnic: add statistics API Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 56/73] net/ntnic: add statistics poll Serhii Iliushyk
` (21 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Receive Port FIFO (RPF) module controls the small FPGA FIFO in
which packets are stored before they enter the packet processor
pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 25 +++-
drivers/net/ntnic/include/ntnic_stat.h | 2 +
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +++++++
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 ++++++++++++++++++
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 ++
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +++
10 files changed, 228 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
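The patch reads the maturing delay through a new nthw_field_get_signed() helper, which sign-extends a sub-word register field (the delay is an 18-bit two's complement value). The extension step can be sketched in isolation; the generic `bits` parameter is an assumption for illustration, the driver derives the width from the field mask:

```c
#include <stdint.h>

/* Sign-extend the low 'bits' bits of 'val' to a signed 32-bit value. */
static inline int32_t sign_extend(uint32_t val, unsigned int bits)
{
	uint32_t mask = (bits >= 32) ? ~0U : ((1U << bits) - 1U);

	val &= mask;

	if (val & (1U << (bits - 1))) /* sign bit set? */
		val |= ~mask;         /* propagate it into the upper bits */

	return (int32_t)val;
}
```

For an 18-bit field, 0x3FFFF (all ones) decodes to -1 and 0x20000 to -131072, which is the behavior the driver relies on for negative maturing delays.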
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 0e20f3ea45..f733fd5459 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -11,6 +11,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nim.h"
#include "flow_filter.h"
+#include "ntnic_stat.h"
#include "ntnic_mod_reg.h"
#define DEFAULT_MAX_BPS_SPEED 100e9
@@ -43,7 +44,7 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
if (!p_nthw_rmc) {
nthw_stat_delete(p_nthw_stat);
- NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ NT_LOG(ERR, NTNIC, "%s: ERROR rmc allocation", p_adapter_id_str);
return -1;
}
@@ -54,6 +55,22 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
p_nt4ga_stat->mp_nthw_rmc = NULL;
}
+ if (nthw_rpf_init(NULL, p_fpga, p_adapter_info->adapter_no) == 0) {
+ nthw_rpf_t *p_nthw_rpf = nthw_rpf_new();
+
+ if (!p_nthw_rpf) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rpf_init(p_nthw_rpf, p_fpga, p_adapter_info->adapter_no);
+ p_nt4ga_stat->mp_nthw_rpf = p_nthw_rpf;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rpf = NULL;
+ }
+
p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
nthw_stat_init(p_nthw_stat, p_fpga, 0);
@@ -77,6 +94,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_block(p_nt4ga_stat->mp_nthw_rpf);
+
/* Allocate and map memory for fpga statistics */
{
uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
@@ -112,6 +132,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_unblock(p_nt4ga_stat->mp_nthw_rpf);
+
p_nt4ga_stat->mp_stat_structs_color =
calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 2aee3f8425..ed24a892ec 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -8,6 +8,7 @@
#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_rpf.h"
#include "nthw_fpga_model.h"
#define NT_MAX_COLOR_FLOW_STATS 0x400
@@ -102,6 +103,7 @@ struct flm_counters_v1 {
struct nt4ga_stat_s {
nthw_stat_t *mp_nthw_stat;
nthw_rmc_t *mp_nthw_rmc;
+ nthw_rpf_t *mp_nthw_rpf;
struct nt_dma_s *p_stat_dma;
uint32_t *p_stat_dma_virtual;
uint32_t n_stat_size;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 216341bb11..ed5a201fd5 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_iic.c',
'nthw/core/nthw_mac_pcs.c',
'nthw/core/nthw_pcie3.c',
+ 'nthw/core/nthw_rpf.c',
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
new file mode 100644
index 0000000000..4c6c57ba55
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
@@ -0,0 +1,48 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTHW_RPF_HPP_
+#define NTHW_RPF_HPP_
+
+#include "nthw_fpga_model.h"
+#include "pthread.h"
+struct nthw_rpf {
+ nthw_fpga_t *mp_fpga;
+
+ nthw_module_t *m_mod_rpf;
+
+ int mn_instance;
+
+ nthw_register_t *mp_reg_control;
+ nthw_field_t *mp_fld_control_pen;
+ nthw_field_t *mp_fld_control_rpp_en;
+ nthw_field_t *mp_fld_control_st_tgl_en;
+ nthw_field_t *mp_fld_control_keep_alive_en;
+
+ nthw_register_t *mp_ts_sort_prg;
+ nthw_field_t *mp_fld_ts_sort_prg_maturing_delay;
+ nthw_field_t *mp_fld_ts_sort_prg_ts_at_eof;
+
+ int m_default_maturing_delay;
+ bool m_administrative_block; /* used to enforce license expiry */
+
+ pthread_mutex_t rpf_mutex;
+};
+
+typedef struct nthw_rpf nthw_rpf_t;
+typedef struct nthw_rpf nt_rpf;
+
+nthw_rpf_t *nthw_rpf_new(void);
+void nthw_rpf_delete(nthw_rpf_t *p);
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rpf_administrative_block(nthw_rpf_t *p);
+void nthw_rpf_block(nthw_rpf_t *p);
+void nthw_rpf_unblock(nthw_rpf_t *p);
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay);
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p);
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable);
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p);
+
+#endif
diff --git a/drivers/net/ntnic/nthw/core/nthw_rpf.c b/drivers/net/ntnic/nthw/core/nthw_rpf.c
new file mode 100644
index 0000000000..81c704d01a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_rpf.c
@@ -0,0 +1,119 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+#include "nthw_rpf.h"
+
+nthw_rpf_t *nthw_rpf_new(void)
+{
+ nthw_rpf_t *p = malloc(sizeof(nthw_rpf_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_rpf_t));
+
+ return p;
+}
+
+void nthw_rpf_delete(nthw_rpf_t *p)
+{
+ if (p) {
+ memset(p, 0, sizeof(nthw_rpf_t));
+ free(p);
+ }
+}
+
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *p_mod = nthw_fpga_query_module(p_fpga, MOD_RPF, n_instance);
+
+ if (p == NULL)
+ return p_mod == NULL ? -1 : 0;
+
+ if (p_mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: MOD_RPF %d: no such instance",
+ p->mp_fpga->p_fpga_info->mp_adapter_id_str, p->mn_instance);
+ return -1;
+ }
+
+ p->m_mod_rpf = p_mod;
+
+ p->mp_fpga = p_fpga;
+
+ p->m_administrative_block = false;
+
+ /* CONTROL */
+ p->mp_reg_control = nthw_module_get_register(p->m_mod_rpf, RPF_CONTROL);
+ p->mp_fld_control_pen = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_PEN);
+ p->mp_fld_control_rpp_en = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_RPP_EN);
+ p->mp_fld_control_st_tgl_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_ST_TGL_EN);
+ p->mp_fld_control_keep_alive_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_KEEP_ALIVE_EN);
+
+ /* TS_SORT_PRG */
+ p->mp_ts_sort_prg = nthw_module_get_register(p->m_mod_rpf, RPF_TS_SORT_PRG);
+ p->mp_fld_ts_sort_prg_maturing_delay =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_MATURING_DELAY);
+ p->mp_fld_ts_sort_prg_ts_at_eof =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_TS_AT_EOF);
+ p->m_default_maturing_delay =
+ nthw_fpga_get_product_param(p_fpga, NT_RPF_MATURING_DEL_DEFAULT, 0);
+
+ /* Initialize mutex */
+ pthread_mutex_init(&p->rpf_mutex, NULL);
+ return 0;
+}
+
+void nthw_rpf_administrative_block(nthw_rpf_t *p)
+{
+ /* block all MAC ports */
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+
+ p->m_administrative_block = true;
+}
+
+void nthw_rpf_block(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+}
+
+void nthw_rpf_unblock(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+
+ nthw_field_set_val32(p->mp_fld_control_pen, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_rpp_en, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_st_tgl_en, 1);
+ nthw_field_set_val_flush32(p->mp_fld_control_keep_alive_en, 1);
+}
+
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_maturing_delay, (uint32_t)delay);
+}
+
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ /* Maturing delay is a two's complement 18 bit value, so we retrieve it as signed */
+ return nthw_field_get_signed(p->mp_fld_ts_sort_prg_maturing_delay);
+}
+
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_ts_at_eof, enable);
+}
+
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p)
+{
+ return nthw_field_get_updated(p->mp_fld_ts_sort_prg_ts_at_eof);
+}
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
index 4d495f5b96..9eaaeb550d 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
@@ -1050,6 +1050,18 @@ uint32_t nthw_field_get_val32(const nthw_field_t *p)
return val;
}
+int32_t nthw_field_get_signed(const nthw_field_t *p)
+{
+ uint32_t val;
+
+ nthw_field_get_val(p, &val, 1);
+
+ if (val & (1U << nthw_field_get_bit_pos_high(p))) /* check sign */
+ val = val | ~nthw_field_get_mask(p); /* sign extension */
+
+ return (int32_t)val; /* cast to signed value */
+}
+
uint32_t nthw_field_get_updated(const nthw_field_t *p)
{
uint32_t val;
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
index 7956f0689e..d4e7ab3edd 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
@@ -227,6 +227,7 @@ void nthw_field_get_val(const nthw_field_t *p, uint32_t *p_data, uint32_t len);
void nthw_field_set_val(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
void nthw_field_set_val_flush(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
uint32_t nthw_field_get_val32(const nthw_field_t *p);
+int32_t nthw_field_get_signed(const nthw_field_t *p);
uint32_t nthw_field_get_updated(const nthw_field_t *p);
void nthw_field_update_register(const nthw_field_t *p);
void nthw_field_flush_register(const nthw_field_t *p);
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index ddc144dc02..03122acaf5 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,6 +41,7 @@
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
#define MOD_RPL (0x6de535c3UL)
+#define MOD_RPF (0x8d30dcddUL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 8f196f885f..7067f4b1d0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -39,6 +39,7 @@
#include "nthw_fpga_reg_defs_qsl.h"
#include "nthw_fpga_reg_defs_rac.h"
#include "nthw_fpga_reg_defs_rmc.h"
+#include "nthw_fpga_reg_defs_rpf.h"
#include "nthw_fpga_reg_defs_rpl.h"
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
new file mode 100644
index 0000000000..72f450b85d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_RPF_
+#define _NTHW_FPGA_REG_DEFS_RPF_
+
+/* RPF */
+#define RPF_CONTROL (0x7a5bdb50UL)
+#define RPF_CONTROL_KEEP_ALIVE_EN (0x80be3ffcUL)
+#define RPF_CONTROL_PEN (0xb23137b8UL)
+#define RPF_CONTROL_RPP_EN (0xdb51f109UL)
+#define RPF_CONTROL_ST_TGL_EN (0x45a6ecfaUL)
+#define RPF_TS_SORT_PRG (0xff1d137eUL)
+#define RPF_TS_SORT_PRG_MATURING_DELAY (0x2a38e127UL)
+#define RPF_TS_SORT_PRG_TS_AT_EOF (0x9f27d433UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_RPF_ */
--
2.45.0
* [PATCH v1 56/73] net/ntnic: add statistics poll
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (54 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 55/73] net/ntnic: add rpf module Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 57/73] net/ntnic: added flm stat interface Serhii Iliushyk
` (20 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add a mechanism that polls the statistics module and updates the
values collected via the DMA module.
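The poll added here is rate limited with the TSC: it returns early unless at least one second's worth of cycles has elapsed since the previous read. The throttle logic, detached from DPDK (`tsc_freq` and the cycle values stand in for `rte_tsc_freq` and `rte_get_tsc_cycles()`):

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Return true (and record the new timestamp) when at least one second,
 * expressed in TSC cycles, has passed since *last_rtc; false otherwise.
 */
static inline bool stat_poll_due(uint64_t now_rtc, uint64_t *last_rtc,
				 uint64_t tsc_freq)
{
	if (now_rtc - *last_rtc < tsc_freq)
		return false;

	*last_rtc = now_rtc;
	return true;
}
```

Note that `*last_rtc` must persist between calls (in the driver it lives in `pmd_internals`, or as a static for the global color-counter pass); a stack local reset to zero on every call would defeat the throttle.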
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 343 ++++++++++++++++++
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 78 ++++
.../net/ntnic/nthw/core/include/nthw_rmc.h | 5 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 20 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 1 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 128 +++++++
drivers/net/ntnic/ntnic_ethdev.c | 143 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 +
9 files changed, 721 insertions(+)
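The collector converts the FPGA's packed 32:32 seconds:nanoseconds timestamp to plain nanoseconds with the timestamp2ns() helper shown in the diff below; the conversion can be exercised on its own:

```c
#include <stdint.h>

/* Upper 32 bits are seconds, lower 32 bits are nanoseconds. */
static inline uint64_t timestamp2ns(uint64_t ts)
{
	return (ts >> 32) * 1000000000ULL + (ts & 0xffffffffULL);
}
```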
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index f733fd5459..3afc5b7853 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -16,6 +16,27 @@
#define DEFAULT_MAX_BPS_SPEED 100e9
+/* Inline timestamp format is pcap 32:32 bits. Convert to nsecs */
+static inline uint64_t timestamp2ns(uint64_t ts)
+{
+ return ((ts) >> 32) * 1000000000 + ((ts) & 0xffffffff);
+}
+
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual);
+
+static int nt4ga_stat_collect(struct adapter_info_s *p_adapter_info, nt4ga_stat_t *p_nt4ga_stat)
+{
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ p_nt4ga_stat->last_timestamp = timestamp2ns(*p_nthw_stat->mp_timestamp);
+ nt4ga_stat_collect_cap_v1_stats(p_adapter_info, p_nt4ga_stat,
+ p_nt4ga_stat->p_stat_dma_virtual);
+
+ return 0;
+}
+
static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
{
const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
@@ -203,9 +224,331 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return 0;
}
+/* Called with stat mutex locked */
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual)
+{
+ (void)p_adapter_info;
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL)
+ return -1;
+
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
+ const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
+ int c, h, p;
+
+ if (!p_nthw_stat || !p_nt4ga_stat)
+ return -1;
+
+ if (p_nthw_stat->mn_stat_layout_version < 6) {
+ NT_LOG(ERR, NTNIC, "HW STA module version not supported");
+ return -1;
+ }
+
+ /* RX ports */
+ for (c = 0; c < p_nthw_stat->m_nb_color_counters / 2; c++) {
+ p_nt4ga_stat->mp_stat_structs_color[c].color_packets += p_stat_dma_virtual[c * 2];
+ p_nt4ga_stat->mp_stat_structs_color[c].color_bytes +=
+ p_stat_dma_virtual[c * 2 + 1];
+ }
+
+ /* Move to Host buffer counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_color_counters;
+
+ for (h = 0; h < p_nthw_stat->m_nb_rx_host_buffers; h++) {
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_packets += p_stat_dma_virtual[h * 8];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_packets += p_stat_dma_virtual[h * 8 + 1];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_packets += p_stat_dma_virtual[h * 8 + 2];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_packets +=
+ p_stat_dma_virtual[h * 8 + 3];
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_bytes += p_stat_dma_virtual[h * 8 + 4];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_bytes += p_stat_dma_virtual[h * 8 + 5];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_bytes += p_stat_dma_virtual[h * 8 + 6];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_bytes +=
+ p_stat_dma_virtual[h * 8 + 7];
+ }
+
+ /* Move to Rx Port counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_rx_hb_counters;
+
+ /* RX ports */
+ for (p = 0; p < n_rx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 23];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].duplicate +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 24];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_ip_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 25];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_udp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 26];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_tcp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 27];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_giant_undersize +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 28];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_baby_giant +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 29];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_not_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 30];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 31];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 32];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 33];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 34];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 35];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 36];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 37];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 43];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 44];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 45];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 46];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 47]
+ : 0;
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 48];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 49];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 50];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 51];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 52];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 53];
+
+ /* Rx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41] +
+ (p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0);
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_rx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+ p_nt4ga_stat->a_port_rx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_rx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Move to Tx Port counters */
+ p_stat_dma_virtual += n_rx_ports * p_nthw_stat->m_nb_rx_port_counters;
+
+ for (p = 0; p < n_tx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 23];
+
+ /* Tx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_tx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+ p_nt4ga_stat->a_port_tx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_tx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Update and get port load counters */
+ for (p = 0; p < n_rx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ for (p = 0; p < n_tx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ return 0;
+}
+
static struct nt4ga_stat_ops ops = {
.nt4ga_stat_init = nt4ga_stat_init,
.nt4ga_stat_setup = nt4ga_stat_setup,
+ .nt4ga_stat_collect = nt4ga_stat_collect
};
void nt4ga_stat_ops_init(void)
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 1135e9a539..38e4d0ca35 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -16,6 +16,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
+ rte_thread_t stat_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index ed24a892ec..0735dbc085 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -85,16 +85,87 @@ struct color_counters {
};
struct host_buffer_counters {
+ uint64_t flush_packets;
+ uint64_t drop_packets;
+ uint64_t fwd_packets;
+ uint64_t dbs_drop_packets;
+ uint64_t flush_bytes;
+ uint64_t drop_bytes;
+ uint64_t fwd_bytes;
+ uint64_t dbs_drop_bytes;
};
struct port_load_counters {
+ uint64_t rx_pps;
uint64_t rx_pps_max;
+ uint64_t tx_pps;
uint64_t tx_pps_max;
+ uint64_t rx_bps;
uint64_t rx_bps_max;
+ uint64_t tx_bps;
uint64_t tx_bps_max;
};
struct port_counters_v2 {
+ /* Rx/Tx common port counters */
+ uint64_t drop_events;
+ uint64_t pkts;
+ /* FPGA counters */
+ uint64_t octets;
+ uint64_t broadcast_pkts;
+ uint64_t multicast_pkts;
+ uint64_t unicast_pkts;
+ uint64_t pkts_alignment;
+ uint64_t pkts_code_violation;
+ uint64_t pkts_crc;
+ uint64_t undersize_pkts;
+ uint64_t oversize_pkts;
+ uint64_t fragments;
+ uint64_t jabbers_not_truncated;
+ uint64_t jabbers_truncated;
+ uint64_t pkts_64_octets;
+ uint64_t pkts_65_to_127_octets;
+ uint64_t pkts_128_to_255_octets;
+ uint64_t pkts_256_to_511_octets;
+ uint64_t pkts_512_to_1023_octets;
+ uint64_t pkts_1024_to_1518_octets;
+ uint64_t pkts_1519_to_2047_octets;
+ uint64_t pkts_2048_to_4095_octets;
+ uint64_t pkts_4096_to_8191_octets;
+ uint64_t pkts_8192_to_max_octets;
+ uint64_t mac_drop_events;
+ uint64_t pkts_lr;
+ /* Rx only port counters */
+ uint64_t duplicate;
+ uint64_t pkts_ip_chksum_error;
+ uint64_t pkts_udp_chksum_error;
+ uint64_t pkts_tcp_chksum_error;
+ uint64_t pkts_giant_undersize;
+ uint64_t pkts_baby_giant;
+ uint64_t pkts_not_isl_vlan_mpls;
+ uint64_t pkts_isl;
+ uint64_t pkts_vlan;
+ uint64_t pkts_isl_vlan;
+ uint64_t pkts_mpls;
+ uint64_t pkts_isl_mpls;
+ uint64_t pkts_vlan_mpls;
+ uint64_t pkts_isl_vlan_mpls;
+ uint64_t pkts_no_filter;
+ uint64_t pkts_dedup_drop;
+ uint64_t pkts_filter_drop;
+ uint64_t pkts_overflow;
+ uint64_t pkts_dbs_drop;
+ uint64_t octets_no_filter;
+ uint64_t octets_dedup_drop;
+ uint64_t octets_filter_drop;
+ uint64_t octets_overflow;
+ uint64_t octets_dbs_drop;
+ uint64_t ipft_first_hit;
+ uint64_t ipft_first_not_hit;
+ uint64_t ipft_mid_hit;
+ uint64_t ipft_mid_not_hit;
+ uint64_t ipft_last_hit;
+ uint64_t ipft_last_not_hit;
};
struct flm_counters_v1 {
@@ -147,6 +218,8 @@ struct nt4ga_stat_s {
uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_drops_total[NUM_ADAPTER_PORTS_MAX];
};
typedef struct nt4ga_stat_s nt4ga_stat_t;
@@ -159,4 +232,9 @@ int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
uint32_t *p_stat_dma_virtual);
int nthw_stat_trigger(nthw_stat_t *p);
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index b239752674..9c40804cd9 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -47,4 +47,9 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p);
+
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 748519aeb4..570a179fc8 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,26 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_sf_ram_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_descr_fifo_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p)
+{
+ return (p->mp_reg_dbg) ? nthw_field_get_updated(p->mp_fld_dbg_merge) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p)
+{
+ return (p->mp_reg_mac_if) ? nthw_field_get_updated(p->mp_fld_mac_if_err) : 0xffffffff;
+}
+
void nthw_rmc_block(nthw_rmc_t *p)
{
/* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index ea27f96865..2c2e4d9d21 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
+#include "ntlog.h"
#include "ntnic_mod_reg.h"
#include "flow_api.h"
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
index 6adcd2e090..078eec5e1f 100644
--- a/drivers/net/ntnic/nthw/stat/nthw_stat.c
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -368,3 +368,131 @@ int nthw_stat_trigger(nthw_stat_t *p)
return 0;
}
+
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 86876ecda6..f94340f489 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -4,6 +4,9 @@
*/
#include <stdint.h>
+#include <stdarg.h>
+
+#include <signal.h>
#include <rte_eal.h>
#include <rte_dev.h>
@@ -25,6 +28,7 @@
#include "nt_util.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
#define THREAD_JOIN(a) rte_thread_join(a, NULL)
#define THREAD_FUNC static uint32_t
@@ -67,6 +71,9 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
uint64_t rte_tsc_freq;
+static void (*previous_handler)(int sig);
+static rte_thread_t shutdown_tid;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -1407,6 +1414,7 @@ drv_deinit(struct drv_s *p_drv)
/* stop statistics threads */
p_drv->ntdrv.b_shutdown = true;
+ THREAD_JOIN(p_nt_drv->stat_thread);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
@@ -1628,6 +1636,87 @@ THREAD_FUNC adapter_flm_update_thread_fn(void *context)
return THREAD_RETURN;
}
+/*
+ * Adapter stat thread
+ */
+THREAD_FUNC adapter_stat_thread_fn(void *context)
+{
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
+
+ if (nt4ga_stat_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "Statistics module uninitialized");
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const char *const p_adapter_id_str = p_nt_drv->adapter_info.mp_adapter_id_str;
+ (void)p_adapter_id_str;
+
+ if (!p_nthw_stat)
+ return THREAD_RETURN;
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: begin", p_adapter_id_str);
+
+ assert(p_nthw_stat);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ nt_os_wait_usec(10 * 1000);
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ uint32_t loop = 0;
+
+ while ((!p_drv->ntdrv.b_shutdown) &&
+ (*p_nthw_stat->mp_timestamp == (uint64_t)-1)) {
+ nt_os_wait_usec(1 * 100);
+
+ if (rte_log_get_level(nt_log_ntnic) == RTE_LOG_DEBUG &&
+ (++loop & 0x3fff) == 0) {
+ if (p_nt4ga_stat->mp_nthw_rpf) {
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+
+ } else if (p_nt4ga_stat->mp_nthw_rmc) {
+ uint32_t sf_ram_of =
+ nthw_rmc_get_status_sf_ram_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+ uint32_t descr_fifo_of =
+ nthw_rmc_get_status_descr_fifo_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+
+ uint32_t dbg_merge =
+ nthw_rmc_get_dbg_merge(p_nt4ga_stat->mp_nthw_rmc);
+ uint32_t mac_if_err =
+ nthw_rmc_get_mac_if_err(p_nt4ga_stat->mp_nthw_rmc);
+
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+ NT_LOG(ERR, NTNIC, "SF RAM Overflow : %08x",
+ sf_ram_of);
+ NT_LOG(ERR, NTNIC, "Descr Fifo Overflow : %08x",
+ descr_fifo_of);
+ NT_LOG(ERR, NTNIC, "DBG Merge : %08x",
+ dbg_merge);
+ NT_LOG(ERR, NTNIC, "MAC If Errors : %08x",
+ mac_if_err);
+ }
+ }
+ }
+
+ /* Check then collect */
+ {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nt4ga_stat_ops->nt4ga_stat_collect(&p_nt_drv->adapter_info, p_nt4ga_stat);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ }
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: end", p_adapter_id_str);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1885,6 +1974,16 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
}
}
+ pthread_mutex_init(&p_nt_drv->stat_lck, NULL);
+ res = THREAD_CTRL_CREATE(&p_nt_drv->stat_thread, "nt4ga_stat_thr", adapter_stat_thread_fn,
+ (void *)p_drv);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
@@ -2075,6 +2174,48 @@ nthw_pci_dev_deinit(struct rte_eth_dev *eth_dev __rte_unused)
return 0;
}
+static void signal_handler_func_int(int sig)
+{
+ if (sig != SIGINT) {
+ signal(sig, previous_handler);
+ raise(sig);
+ return;
+ }
+
+ kill_pmd = 1;
+}
+
+THREAD_FUNC shutdown_thread(void *arg __rte_unused)
+{
+ while (!kill_pmd)
+ nt_os_wait_usec(100 * 1000);
+
+ NT_LOG_DBGX(DBG, NTNIC, "Shutting down because of ctrl+C");
+
+ signal(SIGINT, previous_handler);
+ raise(SIGINT);
+
+ return THREAD_RETURN;
+}
+
+static int init_shutdown(void)
+{
+ NT_LOG(DBG, NTNIC, "Starting shutdown handler");
+ kill_pmd = 0;
+ previous_handler = signal(SIGINT, signal_handler_func_int);
+ THREAD_CREATE(&shutdown_tid, shutdown_thread, NULL);
+
+ /*
+ * 1 time calculation of 1 sec stat update rtc cycles to prevent stat poll
+ * flooding by OVS from multiple virtual port threads - no need to be precise
+ */
+ uint64_t now_rtc = rte_get_tsc_cycles();
+ nt_os_wait_usec(10 * 1000);
+ rte_tsc_freq = 100 * (rte_get_tsc_cycles() - now_rtc);
+
+ return 0;
+}
+
static int
nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
@@ -2117,6 +2258,8 @@ nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
ret = nthw_pci_dev_init(pci_dev);
+ init_shutdown();
+
NT_LOG_DBGX(DBG, NTNIC, "leave: ret=%d", ret);
return ret;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 30b9afb7d3..8b825d8c48 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -186,6 +186,8 @@ void port_init(void);
struct nt4ga_stat_ops {
int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_collect)(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat);
};
void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
--
2.45.0
* [PATCH v1 57/73] net/ntnic: added flm stat interface
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (55 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 56/73] net/ntnic: add statistics poll Serhii Iliushyk
@ 2024-10-21 21:04 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 58/73] net/ntnic: add tsm module Serhii Iliushyk
` (19 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:04 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The FLM statistics module interface was added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 2 ++
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 ++
4 files changed, 16 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 4a1525f237..ed96f77bc0 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -233,4 +233,6 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_filter.h b/drivers/net/ntnic/include/flow_filter.h
index d204c0d882..01777f8c9f 100644
--- a/drivers/net/ntnic/include/flow_filter.h
+++ b/drivers/net/ntnic/include/flow_filter.h
@@ -11,5 +11,6 @@
int flow_filter_init(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device, int adapter_no);
int flow_filter_done(struct flow_nic_dev *dev);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
#endif /* __FLOW_FILTER_HPP__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 2c2e4d9d21..ce28fd2fa1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1058,6 +1058,16 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ (void)ndev;
+ (void)data;
+ (void)size;
+
+ NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
+ return -1;
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
@@ -1072,6 +1082,7 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+ .flow_get_flm_stats = flow_get_flm_stats,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8b825d8c48..8703d478b6 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -336,6 +336,8 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
/*
* Other
*/
--
2.45.0
* [PATCH v1 58/73] net/ntnic: add tsm module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (56 preceding siblings ...)
2024-10-21 21:04 ` [PATCH v1 57/73] net/ntnic: added flm stat interface Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 59/73] net/ntnic: add STA module Serhii Iliushyk
` (18 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TSM module, which operates the timers in the physical NIC, was
added, together with the necessary defines and implementation.
The Time Stamp Module (TSM) controls every aspect of packet
timestamping, including time synchronization, timestamp format, and the
PTP protocol.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 ++++++
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +++++
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 28 +++
7 files changed, 301 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index ed5a201fd5..a6c4fec0be 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -41,6 +41,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
'nthw/core/nthw_gmf.c',
+ 'nthw/core/nthw_tsm.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_tsm.h b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
new file mode 100644
index 0000000000..0a3bcdcaf5
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
@@ -0,0 +1,56 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_TSM_H__
+#define __NTHW_TSM_H__
+
+#include "stdint.h"
+
+#include "nthw_fpga_model.h"
+
+struct nthw_tsm {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_tsm;
+ int mn_instance;
+
+ nthw_field_t *mp_fld_config_ts_format;
+
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t0;
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t1;
+
+ nthw_field_t *mp_fld_timer_timer_t0_max_count;
+
+ nthw_field_t *mp_fld_timer_timer_t1_max_count;
+
+ nthw_register_t *mp_reg_ts_lo;
+ nthw_field_t *mp_fld_ts_lo;
+
+ nthw_register_t *mp_reg_ts_hi;
+ nthw_field_t *mp_fld_ts_hi;
+
+ nthw_register_t *mp_reg_time_lo;
+ nthw_field_t *mp_fld_time_lo;
+
+ nthw_register_t *mp_reg_time_hi;
+ nthw_field_t *mp_fld_time_hi;
+};
+
+typedef struct nthw_tsm nthw_tsm_t;
+typedef struct nthw_tsm nthw_tsm;
+
+nthw_tsm_t *nthw_tsm_new(void);
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts);
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time);
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val);
+
+#endif /* __NTHW_TSM_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_fpga.c b/drivers/net/ntnic/nthw/core/nthw_fpga.c
index 9448c29de1..ca69a9d5b1 100644
--- a/drivers/net/ntnic/nthw/core/nthw_fpga.c
+++ b/drivers/net/ntnic/nthw/core/nthw_fpga.c
@@ -13,6 +13,8 @@
#include "nthw_fpga_instances.h"
#include "nthw_fpga_mod_str_map.h"
+#include "nthw_tsm.h"
+
#include <arpa/inet.h>
int nthw_fpga_get_param_info(struct fpga_info_s *p_fpga_info, nthw_fpga_t *p_fpga)
@@ -179,6 +181,7 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
nthw_hif_t *p_nthw_hif = NULL;
nthw_pcie3_t *p_nthw_pcie3 = NULL;
nthw_rac_t *p_nthw_rac = NULL;
+ nthw_tsm_t *p_nthw_tsm = NULL;
mcu_info_t *p_mcu_info = &p_fpga_info->mcu_info;
uint64_t n_fpga_ident = 0;
@@ -331,6 +334,50 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
p_fpga_info->mp_nthw_hif = p_nthw_hif;
+ p_nthw_tsm = nthw_tsm_new();
+
+ if (p_nthw_tsm) {
+ nthw_tsm_init(p_nthw_tsm, p_fpga, 0);
+
+ nthw_tsm_set_config_ts_format(p_nthw_tsm, 1); /* 1 = TSM: TS format native */
+
+ /* Timer T0 - stat toggle timer */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t0_max_count(p_nthw_tsm, 50 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, true);
+
+ /* Timer T1 - keep alive timer */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t1_max_count(p_nthw_tsm, 100 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, true);
+ }
+
+ p_fpga_info->mp_nthw_tsm = p_nthw_tsm;
+
+ /* TSM sample triggering: test validation... */
+#if defined(DEBUG) && (1)
+ {
+ uint64_t n_time, n_ts;
+ int i;
+
+ for (i = 0; i < 4; i++) {
+ if (p_nthw_hif)
+ nthw_hif_trigger_sample_time(p_nthw_hif);
+
+ else if (p_nthw_pcie3)
+ nthw_pcie3_trigger_sample_time(p_nthw_pcie3);
+
+ nthw_tsm_get_time(p_nthw_tsm, &n_time);
+ nthw_tsm_get_ts(p_nthw_tsm, &n_ts);
+
+ NT_LOG(DBG, NTHW, "%s: TSM time: %016" PRIX64 " %016" PRIX64 "\n",
+ p_adapter_id_str, n_time, n_ts);
+
+ nt_os_wait_usec(1000);
+ }
+ }
+#endif
+
return res;
}
diff --git a/drivers/net/ntnic/nthw/core/nthw_tsm.c b/drivers/net/ntnic/nthw/core/nthw_tsm.c
new file mode 100644
index 0000000000..b88dcb9b0b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_tsm.c
@@ -0,0 +1,167 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_tsm.h"
+
+nthw_tsm_t *nthw_tsm_new(void)
+{
+ nthw_tsm_t *p = malloc(sizeof(nthw_tsm_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_tsm_t));
+
+ return p;
+}
+
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_TSM, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: TSM %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_tsm = mod;
+
+ {
+ nthw_register_t *p_reg;
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_CONFIG);
+ p->mp_fld_config_ts_format = nthw_register_get_field(p_reg, TSM_CONFIG_TS_FORMAT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_CTRL);
+ p->mp_fld_timer_ctrl_timer_en_t0 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T0);
+ p->mp_fld_timer_ctrl_timer_en_t1 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T1);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T0);
+ p->mp_fld_timer_timer_t0_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T0_MAX_COUNT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T1);
+ p->mp_fld_timer_timer_t1_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T1_MAX_COUNT);
+
+ p->mp_reg_time_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_LO);
+ p_reg = p->mp_reg_time_lo;
+ p->mp_fld_time_lo = nthw_register_get_field(p_reg, TSM_TIME_LO_NS);
+
+ p->mp_reg_time_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_HI);
+ p_reg = p->mp_reg_time_hi;
+ p->mp_fld_time_hi = nthw_register_get_field(p_reg, TSM_TIME_HI_SEC);
+
+ p->mp_reg_ts_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_LO);
+ p_reg = p->mp_reg_ts_lo;
+ p->mp_fld_ts_lo = nthw_register_get_field(p_reg, TSM_TS_LO_TIME);
+
+ p->mp_reg_ts_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_HI);
+ p_reg = p->mp_reg_ts_hi;
+ p->mp_fld_ts_hi = nthw_register_get_field(p_reg, TSM_TS_HI_TIME);
+ }
+ return 0;
+}
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts)
+{
+ uint32_t n_ts_lo, n_ts_hi;
+ uint64_t val;
+
+ if (!p_ts)
+ return -1;
+
+ n_ts_lo = nthw_field_get_updated(p->mp_fld_ts_lo);
+ n_ts_hi = nthw_field_get_updated(p->mp_fld_ts_hi);
+
+ val = ((((uint64_t)n_ts_hi) << 32UL) | n_ts_lo);
+
+	*p_ts = val;
+
+ return 0;
+}
+
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time)
+{
+ uint32_t n_time_lo, n_time_hi;
+ uint64_t val;
+
+ if (!p_time)
+ return -1;
+
+ n_time_lo = nthw_field_get_updated(p->mp_fld_time_lo);
+ n_time_hi = nthw_field_get_updated(p->mp_fld_time_hi);
+
+ val = ((((uint64_t)n_time_hi) << 32UL) | n_time_lo);
+
+	*p_time = val;
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T0 - stat toggle timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t0_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t0_max_count,
+ n_timer_val); /* ns (50*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T1 - keep alive timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t1_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t1_max_count,
+ n_timer_val); /* ns (100*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val)
+{
+ nthw_field_update_register(p->mp_fld_config_ts_format);
+ /* 0x1: Native - 10ns units, start date: 1970-01-01. */
+ nthw_field_set_val_flush32(p->mp_fld_config_ts_format, n_val);
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 03122acaf5..e6ed9e714b 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -48,6 +48,7 @@
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_STA (0x76fae64dUL)
+#define MOD_TSM (0x35422a24UL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7067f4b1d0..4d299c6aa8 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -44,6 +44,7 @@
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
#include "nthw_fpga_reg_defs_sdc.h"
+#include "nthw_fpga_reg_defs_tsm.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
#include "nthw_fpga_reg_defs_sta.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
new file mode 100644
index 0000000000..a087850aa4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_TSM_
+#define _NTHW_FPGA_REG_DEFS_TSM_
+
+/* TSM */
+#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_TIMER_CTRL (0x648da051UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
+#define TSM_TIMER_T0 (0x417217a5UL)
+#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
+#define TSM_TIMER_T1 (0x36752733UL)
+#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HI (0x175acea1UL)
+#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
+#define TSM_TIME_LO (0x9a55ae90UL)
+#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TS_HI (0xccfe9e5eUL)
+#define TSM_TS_HI_TIME (0xc23fed30UL)
+#define TSM_TS_LO (0x41f1fe6fUL)
+#define TSM_TS_LO_TIME (0xe0292a3eUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 59/73] net/ntnic: add STA module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (57 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 58/73] net/ntnic: add tsm module Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 60/73] net/ntnic: add TSM module Serhii Iliushyk
` (17 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with STA module
support, which enables statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 92 ++++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 8 ++
3 files changed, 100 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index a3d9f94fc6..efdb084cd6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2486,6 +2486,95 @@ static nthw_fpga_register_init_s slc_registers[] = {
{ SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
};
+static nthw_fpga_field_init_s sta_byte_fields[] = {
+ { STA_BYTE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_cfg_fields[] = {
+ { STA_CFG_CNT_CLEAR, 1, 1, 0 },
+ { STA_CFG_DMA_ENA, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_cv_err_fields[] = {
+ { STA_CV_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_fcs_err_fields[] = {
+ { STA_FCS_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_lsb_fields[] = {
+ { STA_HOST_ADR_LSB_LSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_msb_fields[] = {
+ { STA_HOST_ADR_MSB_MSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_load_bin_fields[] = {
+ { STA_LOAD_BIN_BIN, 32, 0, 8388607 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_0_fields[] = {
+ { STA_LOAD_BPS_RX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_1_fields[] = {
+ { STA_LOAD_BPS_RX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_0_fields[] = {
+ { STA_LOAD_BPS_TX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_1_fields[] = {
+ { STA_LOAD_BPS_TX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_0_fields[] = {
+ { STA_LOAD_PPS_RX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_1_fields[] = {
+ { STA_LOAD_PPS_RX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_0_fields[] = {
+ { STA_LOAD_PPS_TX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_1_fields[] = {
+ { STA_LOAD_PPS_TX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_pckt_fields[] = {
+ { STA_PCKT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_status_fields[] = {
+ { STA_STATUS_STAT_TOGGLE_MISSED, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s sta_registers[] = {
+ { STA_BYTE, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_byte_fields },
+ { STA_CFG, 0, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, sta_cfg_fields },
+ { STA_CV_ERR, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_cv_err_fields },
+ { STA_FCS_ERR, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_fcs_err_fields },
+ { STA_HOST_ADR_LSB, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_lsb_fields },
+ { STA_HOST_ADR_MSB, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_msb_fields },
+ { STA_LOAD_BIN, 8, 32, NTHW_FPGA_REG_TYPE_WO, 8388607, 1, sta_load_bin_fields },
+ { STA_LOAD_BPS_RX_0, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_0_fields },
+ { STA_LOAD_BPS_RX_1, 13, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_1_fields },
+ { STA_LOAD_BPS_TX_0, 15, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_0_fields },
+ { STA_LOAD_BPS_TX_1, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_1_fields },
+ { STA_LOAD_PPS_RX_0, 10, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_0_fields },
+ { STA_LOAD_PPS_RX_1, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_1_fields },
+ { STA_LOAD_PPS_TX_0, 14, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_0_fields },
+ { STA_LOAD_PPS_TX_1, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_1_fields },
+ { STA_PCKT, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_pckt_fields },
+ { STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2537,6 +2626,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
+ { MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2695,5 +2785,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index 150b9dd976..a2ab266931 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -19,5 +19,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RAC, "RAC" },
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
+ { MOD_STA, "STA" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
index 640ffcbc52..0cd183fcaa 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -7,11 +7,17 @@
#define _NTHW_FPGA_REG_DEFS_STA_
/* STA */
+#define STA_BYTE (0xa08364d4UL)
+#define STA_BYTE_CNT (0x3119e6bcUL)
#define STA_CFG (0xcecaf9f4UL)
#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
#define STA_CFG_CNT_FRZ (0x8c27a596UL)
#define STA_CFG_DMA_ENA (0x940dbacUL)
#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_CV_ERR (0x7db7db5dUL)
+#define STA_CV_ERR_CNT (0x2c02fbbeUL)
+#define STA_FCS_ERR (0xa0de1647UL)
+#define STA_FCS_ERR_CNT (0xc68c37d1UL)
#define STA_HOST_ADR_LSB (0xde569336UL)
#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
#define STA_HOST_ADR_MSB (0xdf94f901UL)
@@ -34,6 +40,8 @@
#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_PCKT (0xecc8f30aUL)
+#define STA_PCKT_CNT (0x63291d16UL)
#define STA_STATUS (0x91c5c51cUL)
#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
--
2.45.0
* [PATCH v1 60/73] net/ntnic: add TSM module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (58 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 59/73] net/ntnic: add STA module Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 61/73] net/ntnic: add xstats Serhii Iliushyk
` (16 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with TSM module
support, which enables statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../supported/nthw_fpga_9563_055_049_0000.c | 394 +++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 177 ++++++++
4 files changed, 572 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e5d5abd0ed..64351bcdc7 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,6 +12,7 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
+Basic stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efdb084cd6..620968ceb6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2575,6 +2575,397 @@ static nthw_fpga_register_init_s sta_registers[] = {
{ STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
};
+static nthw_fpga_field_init_s tsm_con0_config_fields[] = {
+ { TSM_CON0_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON0_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON0_CONFIG_PORT, 3, 0, 0 }, { TSM_CON0_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON0_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_interface_fields[] = {
+ { TSM_CON0_INTERFACE_EX_TERM, 2, 0, 3 }, { TSM_CON0_INTERFACE_IN_REF_PWM, 8, 12, 128 },
+ { TSM_CON0_INTERFACE_PWM_ENA, 1, 2, 0 }, { TSM_CON0_INTERFACE_RESERVED, 1, 3, 0 },
+ { TSM_CON0_INTERFACE_VTERM_PWM, 8, 4, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_hi_fields[] = {
+ { TSM_CON0_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_lo_fields[] = {
+ { TSM_CON0_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_config_fields[] = {
+ { TSM_CON1_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON1_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON1_CONFIG_PORT, 3, 0, 0 }, { TSM_CON1_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON1_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_hi_fields[] = {
+ { TSM_CON1_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_lo_fields[] = {
+ { TSM_CON1_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_config_fields[] = {
+ { TSM_CON2_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON2_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON2_CONFIG_PORT, 3, 0, 0 }, { TSM_CON2_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON2_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_hi_fields[] = {
+ { TSM_CON2_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_lo_fields[] = {
+ { TSM_CON2_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_config_fields[] = {
+ { TSM_CON3_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON3_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON3_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_hi_fields[] = {
+ { TSM_CON3_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_lo_fields[] = {
+ { TSM_CON3_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_config_fields[] = {
+ { TSM_CON4_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON4_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON4_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_hi_fields[] = {
+ { TSM_CON4_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_lo_fields[] = {
+ { TSM_CON4_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_config_fields[] = {
+ { TSM_CON5_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON5_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON5_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_hi_fields[] = {
+ { TSM_CON5_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_lo_fields[] = {
+ { TSM_CON5_SAMPLE_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_config_fields[] = {
+ { TSM_CON6_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON6_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON6_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_hi_fields[] = {
+ { TSM_CON6_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_lo_fields[] = {
+ { TSM_CON6_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_hi_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_lo_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_config_fields[] = {
+ { TSM_CONFIG_NTTS_SRC, 2, 5, 0 }, { TSM_CONFIG_NTTS_SYNC, 1, 4, 0 },
+ { TSM_CONFIG_TIMESET_EDGE, 2, 8, 1 }, { TSM_CONFIG_TIMESET_SRC, 3, 10, 0 },
+ { TSM_CONFIG_TIMESET_UP, 1, 7, 0 }, { TSM_CONFIG_TS_FORMAT, 4, 0, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_int_config_fields[] = {
+ { TSM_INT_CONFIG_AUTO_DISABLE, 1, 0, 0 },
+ { TSM_INT_CONFIG_MASK, 19, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_int_stat_fields[] = {
+ { TSM_INT_STAT_CAUSE, 19, 1, 0 },
+ { TSM_INT_STAT_ENABLE, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_led_fields[] = {
+ { TSM_LED_LED0_BG_COLOR, 2, 3, 0 }, { TSM_LED_LED0_COLOR, 2, 1, 0 },
+ { TSM_LED_LED0_MODE, 1, 0, 0 }, { TSM_LED_LED0_SRC, 4, 5, 0 },
+ { TSM_LED_LED1_BG_COLOR, 2, 12, 0 }, { TSM_LED_LED1_COLOR, 2, 10, 0 },
+ { TSM_LED_LED1_MODE, 1, 9, 0 }, { TSM_LED_LED1_SRC, 4, 14, 1 },
+ { TSM_LED_LED2_BG_COLOR, 2, 21, 0 }, { TSM_LED_LED2_COLOR, 2, 19, 0 },
+ { TSM_LED_LED2_MODE, 1, 18, 0 }, { TSM_LED_LED2_SRC, 4, 23, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_config_fields[] = {
+ { TSM_NTTS_CONFIG_AUTO_HARDSET, 1, 5, 1 },
+ { TSM_NTTS_CONFIG_EXT_CLK_ADJ, 1, 6, 0 },
+ { TSM_NTTS_CONFIG_HIGH_SAMPLE, 1, 4, 0 },
+ { TSM_NTTS_CONFIG_TS_SRC_FORMAT, 4, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ext_stat_fields[] = {
+ { TSM_NTTS_EXT_STAT_MASTER_ID, 8, 16, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_REV, 8, 24, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_STAT, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_hi_fields[] = {
+ { TSM_NTTS_LIMIT_HI_SEC, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_lo_fields[] = {
+ { TSM_NTTS_LIMIT_LO_NS, 32, 0, 100000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_offset_fields[] = {
+ { TSM_NTTS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_hi_fields[] = {
+ { TSM_NTTS_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_lo_fields[] = {
+ { TSM_NTTS_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_stat_fields[] = {
+ { TSM_NTTS_STAT_NTTS_VALID, 1, 0, 0 },
+ { TSM_NTTS_STAT_SIGNAL_LOST, 8, 1, 0 },
+ { TSM_NTTS_STAT_SYNC_LOST, 8, 9, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_hi_fields[] = {
+ { TSM_NTTS_TS_T0_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_lo_fields[] = {
+ { TSM_NTTS_TS_T0_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_offset_fields[] = {
+ { TSM_NTTS_TS_T0_OFFSET_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_ctrl_fields[] = {
+ { TSM_PB_CTRL_INSTMEM_WR, 1, 1, 0 },
+ { TSM_PB_CTRL_RST, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_instmem_fields[] = {
+ { TSM_PB_INSTMEM_MEM_ADDR, 14, 0, 0 },
+ { TSM_PB_INSTMEM_MEM_DATA, 18, 14, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_i_fields[] = {
+ { TSM_PI_CTRL_I_VAL, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_ki_fields[] = {
+ { TSM_PI_CTRL_KI_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_kp_fields[] = {
+ { TSM_PI_CTRL_KP_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_shl_fields[] = {
+ { TSM_PI_CTRL_SHL_VAL, 4, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_stat_fields[] = {
+ { TSM_STAT_HARD_SYNC, 8, 8, 0 }, { TSM_STAT_LINK_CON0, 1, 0, 0 },
+ { TSM_STAT_LINK_CON1, 1, 1, 0 }, { TSM_STAT_LINK_CON2, 1, 2, 0 },
+ { TSM_STAT_LINK_CON3, 1, 3, 0 }, { TSM_STAT_LINK_CON4, 1, 4, 0 },
+ { TSM_STAT_LINK_CON5, 1, 5, 0 }, { TSM_STAT_NTTS_INSYNC, 1, 6, 0 },
+ { TSM_STAT_PTP_MI_PRESENT, 1, 7, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_ctrl_fields[] = {
+ { TSM_TIMER_CTRL_TIMER_EN_T0, 1, 0, 0 },
+ { TSM_TIMER_CTRL_TIMER_EN_T1, 1, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t0_fields[] = {
+ { TSM_TIMER_T0_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t1_fields[] = {
+ { TSM_TIMER_T1_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_hi_fields[] = {
+ { TSM_TIME_HARDSET_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_lo_fields[] = {
+ { TSM_TIME_HARDSET_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hi_fields[] = {
+ { TSM_TIME_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_lo_fields[] = {
+ { TSM_TIME_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_rate_adj_fields[] = {
+ { TSM_TIME_RATE_ADJ_FRACTION, 29, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_hi_fields[] = {
+ { TSM_TS_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_lo_fields[] = {
+ { TSM_TS_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_offset_fields[] = {
+ { TSM_TS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_fields[] = {
+ { TSM_TS_STAT_OVERRUN, 1, 16, 0 },
+ { TSM_TS_STAT_SAMPLES, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_hi_offset_fields[] = {
+ { TSM_TS_STAT_HI_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_lo_offset_fields[] = {
+ { TSM_TS_STAT_LO_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_hi_fields[] = {
+ { TSM_TS_STAT_TAR_HI_SEC, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_lo_fields[] = {
+ { TSM_TS_STAT_TAR_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x_fields[] = {
+ { TSM_TS_STAT_X_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_hi_fields[] = {
+ { TSM_TS_STAT_X2_HI_NS, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_lo_fields[] = {
+ { TSM_TS_STAT_X2_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_utc_offset_fields[] = {
+ { TSM_UTC_OFFSET_SEC, 8, 0, 0 },
+};
+
+static nthw_fpga_register_init_s tsm_registers[] = {
+ { TSM_CON0_CONFIG, 24, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con0_config_fields },
+ {
+ TSM_CON0_INTERFACE, 25, 20, NTHW_FPGA_REG_TYPE_RW, 524291, 5,
+ tsm_con0_interface_fields
+ },
+ { TSM_CON0_SAMPLE_HI, 27, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_hi_fields },
+ { TSM_CON0_SAMPLE_LO, 26, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_lo_fields },
+ { TSM_CON1_CONFIG, 28, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con1_config_fields },
+ { TSM_CON1_SAMPLE_HI, 30, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_hi_fields },
+ { TSM_CON1_SAMPLE_LO, 29, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_lo_fields },
+ { TSM_CON2_CONFIG, 31, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con2_config_fields },
+ { TSM_CON2_SAMPLE_HI, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_hi_fields },
+ { TSM_CON2_SAMPLE_LO, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_lo_fields },
+ { TSM_CON3_CONFIG, 34, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con3_config_fields },
+ { TSM_CON3_SAMPLE_HI, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_hi_fields },
+ { TSM_CON3_SAMPLE_LO, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_lo_fields },
+ { TSM_CON4_CONFIG, 37, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con4_config_fields },
+ { TSM_CON4_SAMPLE_HI, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_hi_fields },
+ { TSM_CON4_SAMPLE_LO, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_lo_fields },
+ { TSM_CON5_CONFIG, 40, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con5_config_fields },
+ { TSM_CON5_SAMPLE_HI, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_hi_fields },
+ { TSM_CON5_SAMPLE_LO, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_lo_fields },
+ { TSM_CON6_CONFIG, 43, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con6_config_fields },
+ { TSM_CON6_SAMPLE_HI, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_hi_fields },
+ { TSM_CON6_SAMPLE_LO, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_lo_fields },
+ {
+ TSM_CON7_HOST_SAMPLE_HI, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_hi_fields
+ },
+ {
+ TSM_CON7_HOST_SAMPLE_LO, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_lo_fields
+ },
+ { TSM_CONFIG, 0, 13, NTHW_FPGA_REG_TYPE_RW, 257, 6, tsm_config_fields },
+ { TSM_INT_CONFIG, 2, 20, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_int_config_fields },
+ { TSM_INT_STAT, 3, 20, NTHW_FPGA_REG_TYPE_MIXED, 0, 2, tsm_int_stat_fields },
+ { TSM_LED, 4, 27, NTHW_FPGA_REG_TYPE_RW, 16793600, 12, tsm_led_fields },
+ { TSM_NTTS_CONFIG, 13, 7, NTHW_FPGA_REG_TYPE_RW, 32, 4, tsm_ntts_config_fields },
+ { TSM_NTTS_EXT_STAT, 15, 32, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, tsm_ntts_ext_stat_fields },
+ { TSM_NTTS_LIMIT_HI, 23, 16, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_limit_hi_fields },
+ { TSM_NTTS_LIMIT_LO, 22, 32, NTHW_FPGA_REG_TYPE_RW, 100000, 1, tsm_ntts_limit_lo_fields },
+ { TSM_NTTS_OFFSET, 21, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_offset_fields },
+ { TSM_NTTS_SAMPLE_HI, 19, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_hi_fields },
+ { TSM_NTTS_SAMPLE_LO, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_lo_fields },
+ { TSM_NTTS_STAT, 14, 17, NTHW_FPGA_REG_TYPE_RO, 0, 3, tsm_ntts_stat_fields },
+ { TSM_NTTS_TS_T0_HI, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_hi_fields },
+ { TSM_NTTS_TS_T0_LO, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_lo_fields },
+ {
+ TSM_NTTS_TS_T0_OFFSET, 20, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ntts_ts_t0_offset_fields
+ },
+ { TSM_PB_CTRL, 63, 2, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_ctrl_fields },
+ { TSM_PB_INSTMEM, 64, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_instmem_fields },
+ { TSM_PI_CTRL_I, 54, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_i_fields },
+ { TSM_PI_CTRL_KI, 52, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_ki_fields },
+ { TSM_PI_CTRL_KP, 51, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_kp_fields },
+ { TSM_PI_CTRL_SHL, 53, 4, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_shl_fields },
+ { TSM_STAT, 1, 16, NTHW_FPGA_REG_TYPE_RO, 0, 9, tsm_stat_fields },
+ { TSM_TIMER_CTRL, 48, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_timer_ctrl_fields },
+ { TSM_TIMER_T0, 49, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t0_fields },
+ { TSM_TIMER_T1, 50, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t1_fields },
+ { TSM_TIME_HARDSET_HI, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_hi_fields },
+ { TSM_TIME_HARDSET_LO, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_lo_fields },
+ { TSM_TIME_HI, 9, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_hi_fields },
+ { TSM_TIME_LO, 8, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_lo_fields },
+ { TSM_TIME_RATE_ADJ, 10, 29, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_rate_adj_fields },
+ { TSM_TS_HI, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_hi_fields },
+ { TSM_TS_LO, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_lo_fields },
+ { TSM_TS_OFFSET, 7, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ts_offset_fields },
+ { TSM_TS_STAT, 55, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, tsm_ts_stat_fields },
+ {
+ TSM_TS_STAT_HI_OFFSET, 62, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_hi_offset_fields
+ },
+ {
+ TSM_TS_STAT_LO_OFFSET, 61, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_lo_offset_fields
+ },
+ { TSM_TS_STAT_TAR_HI, 57, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_hi_fields },
+ { TSM_TS_STAT_TAR_LO, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_lo_fields },
+ { TSM_TS_STAT_X, 58, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x_fields },
+ { TSM_TS_STAT_X2_HI, 60, 16, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_hi_fields },
+ { TSM_TS_STAT_X2_LO, 59, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_lo_fields },
+ { TSM_UTC_OFFSET, 65, 8, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_utc_offset_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2627,6 +3018,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
{ MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
+ { MOD_TSM, 0, MOD_TSM, 0, 8, NTHW_FPGA_BUS_TYPE_RAB2, 1024, 66, tsm_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2785,5 +3177,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 37, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index a2ab266931..e8ed7faf0d 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -20,5 +20,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
{ MOD_STA, "STA" },
+ { MOD_TSM, "TSM" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
index a087850aa4..cdb733ee17 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -7,8 +7,158 @@
#define _NTHW_FPGA_REG_DEFS_TSM_
/* TSM */
+#define TSM_CON0_CONFIG (0xf893d371UL)
+#define TSM_CON0_CONFIG_BLIND (0x59ccfcbUL)
+#define TSM_CON0_CONFIG_DC_SRC (0x1879812bUL)
+#define TSM_CON0_CONFIG_PORT (0x3ff0bb08UL)
+#define TSM_CON0_CONFIG_PPSIN_2_5V (0xb8e78227UL)
+#define TSM_CON0_CONFIG_SAMPLE_EDGE (0x4a4022ebUL)
+#define TSM_CON0_INTERFACE (0x76e93b59UL)
+#define TSM_CON0_INTERFACE_EX_TERM (0xd079b416UL)
+#define TSM_CON0_INTERFACE_IN_REF_PWM (0x16f73c33UL)
+#define TSM_CON0_INTERFACE_PWM_ENA (0x3629e73fUL)
+#define TSM_CON0_INTERFACE_RESERVED (0xf9c5066UL)
+#define TSM_CON0_INTERFACE_VTERM_PWM (0x6d2b1e23UL)
+#define TSM_CON0_SAMPLE_HI (0x6e536b8UL)
+#define TSM_CON0_SAMPLE_HI_SEC (0x5fc26159UL)
+#define TSM_CON0_SAMPLE_LO (0x8bea5689UL)
+#define TSM_CON0_SAMPLE_LO_NS (0x13d0010dUL)
+#define TSM_CON1_CONFIG (0x3439d3efUL)
+#define TSM_CON1_CONFIG_BLIND (0x98932ebdUL)
+#define TSM_CON1_CONFIG_DC_SRC (0xa1825ac3UL)
+#define TSM_CON1_CONFIG_PORT (0xe266628dUL)
+#define TSM_CON1_CONFIG_PPSIN_2_5V (0x6f05027fUL)
+#define TSM_CON1_CONFIG_SAMPLE_EDGE (0x2f2719adUL)
+#define TSM_CON1_SAMPLE_HI (0xc76be978UL)
+#define TSM_CON1_SAMPLE_HI_SEC (0xe639bab1UL)
+#define TSM_CON1_SAMPLE_LO (0x4a648949UL)
+#define TSM_CON1_SAMPLE_LO_NS (0x8edfe07bUL)
+#define TSM_CON2_CONFIG (0xbab6d40cUL)
+#define TSM_CON2_CONFIG_BLIND (0xe4f20b66UL)
+#define TSM_CON2_CONFIG_DC_SRC (0xb0ff30baUL)
+#define TSM_CON2_CONFIG_PORT (0x5fac0e43UL)
+#define TSM_CON2_CONFIG_PPSIN_2_5V (0xcc5384d6UL)
+#define TSM_CON2_CONFIG_SAMPLE_EDGE (0x808e5467UL)
+#define TSM_CON2_SAMPLE_HI (0x5e898f79UL)
+#define TSM_CON2_SAMPLE_HI_SEC (0xf744d0c8UL)
+#define TSM_CON2_SAMPLE_LO (0xd386ef48UL)
+#define TSM_CON2_SAMPLE_LO_NS (0xf2bec5a0UL)
+#define TSM_CON3_CONFIG (0x761cd492UL)
+#define TSM_CON3_CONFIG_BLIND (0x79fdea10UL)
+#define TSM_CON3_CONFIG_PORT (0x823ad7c6UL)
+#define TSM_CON3_CONFIG_SAMPLE_EDGE (0xe5e96f21UL)
+#define TSM_CON3_SAMPLE_HI (0x9f0750b9UL)
+#define TSM_CON3_SAMPLE_HI_SEC (0x4ebf0b20UL)
+#define TSM_CON3_SAMPLE_LO (0x12083088UL)
+#define TSM_CON3_SAMPLE_LO_NS (0x6fb124d6UL)
+#define TSM_CON4_CONFIG (0x7cd9dd8bUL)
+#define TSM_CON4_CONFIG_BLIND (0x1c3040d0UL)
+#define TSM_CON4_CONFIG_PORT (0xff49d19eUL)
+#define TSM_CON4_CONFIG_SAMPLE_EDGE (0x4adc9b2UL)
+#define TSM_CON4_SAMPLE_HI (0xb63c453aUL)
+#define TSM_CON4_SAMPLE_HI_SEC (0xd5be043aUL)
+#define TSM_CON4_SAMPLE_LO (0x3b33250bUL)
+#define TSM_CON4_SAMPLE_LO_NS (0xa7c8e16UL)
+#define TSM_CON5_CONFIG (0xb073dd15UL)
+#define TSM_CON5_CONFIG_BLIND (0x813fa1a6UL)
+#define TSM_CON5_CONFIG_PORT (0x22df081bUL)
+#define TSM_CON5_CONFIG_SAMPLE_EDGE (0x61caf2f4UL)
+#define TSM_CON5_SAMPLE_HI (0x77b29afaUL)
+#define TSM_CON5_SAMPLE_HI_SEC (0x6c45dfd2UL)
+#define TSM_CON5_SAMPLE_LO (0xfabdfacbUL)
+#define TSM_CON5_SAMPLE_LO_TIME (0x945d87e8UL)
+#define TSM_CON6_CONFIG (0x3efcdaf6UL)
+#define TSM_CON6_CONFIG_BLIND (0xfd5e847dUL)
+#define TSM_CON6_CONFIG_PORT (0x9f1564d5UL)
+#define TSM_CON6_CONFIG_SAMPLE_EDGE (0xce63bf3eUL)
+#define TSM_CON6_SAMPLE_HI (0xee50fcfbUL)
+#define TSM_CON6_SAMPLE_HI_SEC (0x7d38b5abUL)
+#define TSM_CON6_SAMPLE_LO (0x635f9ccaUL)
+#define TSM_CON6_SAMPLE_LO_NS (0xeb124abbUL)
+#define TSM_CON7_HOST_SAMPLE_HI (0xdcd90e52UL)
+#define TSM_CON7_HOST_SAMPLE_HI_SEC (0xd98d3618UL)
+#define TSM_CON7_HOST_SAMPLE_LO (0x51d66e63UL)
+#define TSM_CON7_HOST_SAMPLE_LO_NS (0x8f5594ddUL)
#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_NTTS_SRC (0x1b60227bUL)
+#define TSM_CONFIG_NTTS_SYNC (0x43e0a69dUL)
+#define TSM_CONFIG_TIMESET_EDGE (0x8c381127UL)
+#define TSM_CONFIG_TIMESET_SRC (0xe7590a31UL)
+#define TSM_CONFIG_TIMESET_UP (0x561980c1UL)
#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_INT_CONFIG (0x9a0d52dUL)
+#define TSM_INT_CONFIG_AUTO_DISABLE (0x9581470UL)
+#define TSM_INT_CONFIG_MASK (0xf00cd3d7UL)
+#define TSM_INT_STAT (0xa4611a70UL)
+#define TSM_INT_STAT_CAUSE (0x315168cfUL)
+#define TSM_INT_STAT_ENABLE (0x980a12d1UL)
+#define TSM_LED (0x6ae05f87UL)
+#define TSM_LED_LED0_BG_COLOR (0x897cf9eeUL)
+#define TSM_LED_LED0_COLOR (0x6d7ada39UL)
+#define TSM_LED_LED0_MODE (0x6087b644UL)
+#define TSM_LED_LED0_SRC (0x4fe29639UL)
+#define TSM_LED_LED1_BG_COLOR (0x66be92d0UL)
+#define TSM_LED_LED1_COLOR (0xcb0dd18dUL)
+#define TSM_LED_LED1_MODE (0xabdb65e1UL)
+#define TSM_LED_LED1_SRC (0x7282bf89UL)
+#define TSM_LED_LED2_BG_COLOR (0x8d8929d3UL)
+#define TSM_LED_LED2_COLOR (0xfae5cb10UL)
+#define TSM_LED_LED2_MODE (0x2d4f174fUL)
+#define TSM_LED_LED2_SRC (0x3522c559UL)
+#define TSM_NTTS_CONFIG (0x8bc38bdeUL)
+#define TSM_NTTS_CONFIG_AUTO_HARDSET (0xd75be25dUL)
+#define TSM_NTTS_CONFIG_EXT_CLK_ADJ (0x700425b6UL)
+#define TSM_NTTS_CONFIG_HIGH_SAMPLE (0x37135b7eUL)
+#define TSM_NTTS_CONFIG_TS_SRC_FORMAT (0x6e6e707UL)
+#define TSM_NTTS_EXT_STAT (0x2b0315b7UL)
+#define TSM_NTTS_EXT_STAT_MASTER_ID (0xf263315eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_REV (0xd543795eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_STAT (0x92d96f5eUL)
+#define TSM_NTTS_LIMIT_HI (0x1ddaa85fUL)
+#define TSM_NTTS_LIMIT_HI_SEC (0x315c6ef2UL)
+#define TSM_NTTS_LIMIT_LO (0x90d5c86eUL)
+#define TSM_NTTS_LIMIT_LO_NS (0xe6d94d9aUL)
+#define TSM_NTTS_OFFSET (0x6436e72UL)
+#define TSM_NTTS_OFFSET_NS (0x12d43a06UL)
+#define TSM_NTTS_SAMPLE_HI (0xcdc8aa3eUL)
+#define TSM_NTTS_SAMPLE_HI_SEC (0x4f6588fdUL)
+#define TSM_NTTS_SAMPLE_LO (0x40c7ca0fUL)
+#define TSM_NTTS_SAMPLE_LO_NS (0x6e43ff97UL)
+#define TSM_NTTS_STAT (0x6502b820UL)
+#define TSM_NTTS_STAT_NTTS_VALID (0x3e184471UL)
+#define TSM_NTTS_STAT_SIGNAL_LOST (0x178bedfdUL)
+#define TSM_NTTS_STAT_SYNC_LOST (0xe4cd53dfUL)
+#define TSM_NTTS_TS_T0_HI (0x1300d1b6UL)
+#define TSM_NTTS_TS_T0_HI_TIME (0xa016ae4fUL)
+#define TSM_NTTS_TS_T0_LO (0x9e0fb187UL)
+#define TSM_NTTS_TS_T0_LO_TIME (0x82006941UL)
+#define TSM_NTTS_TS_T0_OFFSET (0xbf70ce4fUL)
+#define TSM_NTTS_TS_T0_OFFSET_COUNT (0x35dd4398UL)
+#define TSM_PB_CTRL (0x7a8b60faUL)
+#define TSM_PB_CTRL_INSTMEM_WR (0xf96e2cbcUL)
+#define TSM_PB_CTRL_RESET (0xa38ade8bUL)
+#define TSM_PB_CTRL_RST (0x3aaa82f4UL)
+#define TSM_PB_INSTMEM (0xb54aeecUL)
+#define TSM_PB_INSTMEM_MEM_ADDR (0x9ac79b6eUL)
+#define TSM_PB_INSTMEM_MEM_DATA (0x65aefa38UL)
+#define TSM_PI_CTRL_I (0x8d71a4e2UL)
+#define TSM_PI_CTRL_I_VAL (0x98baedc9UL)
+#define TSM_PI_CTRL_KI (0xa1bd86cbUL)
+#define TSM_PI_CTRL_KI_GAIN (0x53faa916UL)
+#define TSM_PI_CTRL_KP (0xc5d62e0bUL)
+#define TSM_PI_CTRL_KP_GAIN (0x7723fa45UL)
+#define TSM_PI_CTRL_SHL (0xaa518701UL)
+#define TSM_PI_CTRL_SHL_VAL (0x56f56a6fUL)
+#define TSM_STAT (0xa55bf677UL)
+#define TSM_STAT_HARD_SYNC (0x7fff20fdUL)
+#define TSM_STAT_LINK_CON0 (0x216086f0UL)
+#define TSM_STAT_LINK_CON1 (0x5667b666UL)
+#define TSM_STAT_LINK_CON2 (0xcf6ee7dcUL)
+#define TSM_STAT_LINK_CON3 (0xb869d74aUL)
+#define TSM_STAT_LINK_CON4 (0x260d42e9UL)
+#define TSM_STAT_LINK_CON5 (0x510a727fUL)
+#define TSM_STAT_NTTS_INSYNC (0xb593a245UL)
+#define TSM_STAT_PTP_MI_PRESENT (0x43131eb0UL)
#define TSM_TIMER_CTRL (0x648da051UL)
#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
@@ -16,13 +166,40 @@
#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
#define TSM_TIMER_T1 (0x36752733UL)
#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HARDSET_HI (0xf28bdb46UL)
+#define TSM_TIME_HARDSET_HI_TIME (0x2d9a28baUL)
+#define TSM_TIME_HARDSET_LO (0x7f84bb77UL)
+#define TSM_TIME_HARDSET_LO_TIME (0xf8cefb4UL)
#define TSM_TIME_HI (0x175acea1UL)
#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
#define TSM_TIME_LO (0x9a55ae90UL)
#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TIME_RATE_ADJ (0xb1cc4bb1UL)
+#define TSM_TIME_RATE_ADJ_FRACTION (0xb7ab96UL)
#define TSM_TS_HI (0xccfe9e5eUL)
#define TSM_TS_HI_TIME (0xc23fed30UL)
#define TSM_TS_LO (0x41f1fe6fUL)
#define TSM_TS_LO_TIME (0xe0292a3eUL)
+#define TSM_TS_OFFSET (0x4b2e6e13UL)
+#define TSM_TS_OFFSET_NS (0x68c286b9UL)
+#define TSM_TS_STAT (0x64d41b8cUL)
+#define TSM_TS_STAT_OVERRUN (0xad9db92aUL)
+#define TSM_TS_STAT_SAMPLES (0xb6350e0bUL)
+#define TSM_TS_STAT_HI_OFFSET (0x1aa2ddf2UL)
+#define TSM_TS_STAT_HI_OFFSET_NS (0xeb040e0fUL)
+#define TSM_TS_STAT_LO_OFFSET (0x81218579UL)
+#define TSM_TS_STAT_LO_OFFSET_NS (0xb7ff33UL)
+#define TSM_TS_STAT_TAR_HI (0x65af24b6UL)
+#define TSM_TS_STAT_TAR_HI_SEC (0x7e92f619UL)
+#define TSM_TS_STAT_TAR_LO (0xe8a04487UL)
+#define TSM_TS_STAT_TAR_LO_NS (0xf7b3f439UL)
+#define TSM_TS_STAT_X (0x419f0ddUL)
+#define TSM_TS_STAT_X_NS (0xa48c3f27UL)
+#define TSM_TS_STAT_X2_HI (0xd6b1c517UL)
+#define TSM_TS_STAT_X2_HI_NS (0x4288c50fUL)
+#define TSM_TS_STAT_X2_LO (0x5bbea526UL)
+#define TSM_TS_STAT_X2_LO_NS (0x92633c13UL)
+#define TSM_UTC_OFFSET (0xf622a13aUL)
+#define TSM_UTC_OFFSET_SEC (0xd9c80209UL)
#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 61/73] net/ntnic: add xstats
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (59 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 60/73] net/ntnic: add TSM module Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 62/73] net/ntnic: added flow statistics Serhii Iliushyk
` (15 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Extended statistics implementation and
initialization were added.
The eth_dev_ops API was extended with the new xstats callbacks.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 36 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 112 +++
drivers/net/ntnic/ntnic_mod_reg.c | 15 +
drivers/net/ntnic/ntnic_mod_reg.h | 28 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 ++++++++++++++++++
7 files changed, 1022 insertions(+)
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 64351bcdc7..947c7ba3a1 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -13,6 +13,7 @@ Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
Basic stats = Y
+Extended stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 0735dbc085..4d4affa3cf 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -169,6 +169,39 @@ struct port_counters_v2 {
};
struct flm_counters_v1 {
+ /* FLM 0.17 */
+ uint64_t current;
+ uint64_t learn_done;
+ uint64_t learn_ignore;
+ uint64_t learn_fail;
+ uint64_t unlearn_done;
+ uint64_t unlearn_ignore;
+ uint64_t auto_unlearn_done;
+ uint64_t auto_unlearn_ignore;
+ uint64_t auto_unlearn_fail;
+ uint64_t timeout_unlearn_done;
+ uint64_t rel_done;
+ uint64_t rel_ignore;
+ /* FLM 0.20 */
+ uint64_t prb_done;
+ uint64_t prb_ignore;
+ uint64_t sta_done;
+ uint64_t inf_done;
+ uint64_t inf_skip;
+ uint64_t pck_hit;
+ uint64_t pck_miss;
+ uint64_t pck_unh;
+ uint64_t pck_dis;
+ uint64_t csh_hit;
+ uint64_t csh_miss;
+ uint64_t csh_unh;
+ uint64_t cuc_start;
+ uint64_t cuc_move;
+ /* FLM 0.17 Load */
+ uint64_t load_lps;
+ uint64_t load_aps;
+ uint64_t max_lps;
+ uint64_t max_aps;
};
struct nt4ga_stat_s {
@@ -200,6 +233,9 @@ struct nt4ga_stat_s {
struct host_buffer_counters *mp_stat_structs_hb;
struct port_load_counters *mp_port_load;
+ int flm_stat_ver;
+ struct flm_counters_v1 *mp_stat_structs_flm;
+
/* Rx/Tx totals: */
uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index a6c4fec0be..e59ac5bdb3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -31,6 +31,7 @@ sources = files(
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
'ntnic_filter/ntnic_filter.c',
+ 'ntnic_xstats/ntnic_xstats.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index f94340f489..f6a74c7df2 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1496,6 +1496,113 @@ static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct r
return 0;
}
+static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *stats, unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats = ntnic_xstats_ops->nthw_xstats_get(p_nt4ga_stat, stats, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats =
+ ntnic_xstats_ops->nthw_xstats_get_by_id(p_nt4ga_stat, ids, values, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+
+ struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ ntnic_xstats_ops->nthw_xstats_reset(p_nt4ga_stat, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return dpdk_stats_reset(internals, p_nt_drv, if_index);
+}
+
+static int eth_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names(p_nt4ga_stat, xstats_names, size);
+}
+
+static int eth_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names_by_id(p_nt4ga_stat, xstats_names, ids,
+ size);
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1594,6 +1701,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
+ .xstats_get = eth_xstats_get,
+ .xstats_get_names = eth_xstats_get_names,
+ .xstats_reset = eth_xstats_reset,
+ .xstats_get_by_id = eth_xstats_get_by_id,
+ .xstats_get_names_by_id = eth_xstats_get_names_by_id,
.promiscuous_enable = promiscuous_enable,
.rss_hash_update = eth_dev_rss_hash_update,
.rss_hash_conf_get = rss_hash_conf_get,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 355e2032b1..6737d18a6f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -192,3 +192,18 @@ const struct rte_flow_ops *get_dev_flow_ops(void)
return dev_flow_ops;
}
+
+static struct ntnic_xstats_ops *ntnic_xstats_ops;
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops)
+{
+ ntnic_xstats_ops = ops;
+}
+
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void)
+{
+ if (ntnic_xstats_ops == NULL)
+ ntnic_xstats_ops_init();
+
+ return ntnic_xstats_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8703d478b6..65e7972c68 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -7,6 +7,10 @@
#define __NTNIC_MOD_REG_H__
#include <stdint.h>
+
+#include "rte_ethdev.h"
+#include "rte_flow_driver.h"
+
#include "flow_api.h"
#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
@@ -354,4 +358,28 @@ void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
+struct ntnic_xstats_ops {
+ int (*nthw_xstats_get_names)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size);
+ int (*nthw_xstats_get)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port);
+ void (*nthw_xstats_reset)(nt4ga_stat_t *p_nt4ga_stat, uint8_t port);
+ int (*nthw_xstats_get_names_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size);
+ int (*nthw_xstats_get_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port);
+};
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops);
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void);
+void ntnic_xstats_ops_init(void);
+
#endif /* __NTNIC_MOD_REG_H__ */
diff --git a/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
new file mode 100644
index 0000000000..7604afe6a0
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
@@ -0,0 +1,829 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_ethdev.h>
+
+#include "include/ntdrv_4ga.h"
+#include "ntlog.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "stream_binary_flow_api.h"
+#include "ntnic_mod_reg.h"
+
+struct rte_nthw_xstats_names_s {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint8_t source;
+ unsigned int offset;
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.17
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v1[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.18
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v2[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * STA 0.9
+ */
+
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v3[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) },
+
+ /* FLM 0.17 */
+ { "flm_count_load_lps", 3, offsetof(struct flm_counters_v1, load_lps) },
+ { "flm_count_load_aps", 3, offsetof(struct flm_counters_v1, load_aps) },
+ { "flm_count_max_lps", 3, offsetof(struct flm_counters_v1, max_lps) },
+ { "flm_count_max_aps", 3, offsetof(struct flm_counters_v1, max_aps) },
+
+ { "rx_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps) },
+ { "rx_max_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps_max) },
+ { "rx_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps) },
+ { "rx_max_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps_max) },
+ { "tx_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps) },
+ { "tx_max_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps_max) },
+ { "tx_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps) },
+ { "tx_max_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps_max) }
+};
+
+#define NTHW_CAP_XSTATS_NAMES_V1 RTE_DIM(nthw_cap_xstats_names_v1)
+#define NTHW_CAP_XSTATS_NAMES_V2 RTE_DIM(nthw_cap_xstats_names_v2)
+#define NTHW_CAP_XSTATS_NAMES_V3 RTE_DIM(nthw_cap_xstats_names_v3)
+
+/*
+ * Container for the reset values
+ */
+#define NTHW_XSTATS_SIZE NTHW_CAP_XSTATS_NAMES_V3
+
+static uint64_t nthw_xstats_reset_val[NUM_ADAPTER_PORTS_MAX][NTHW_XSTATS_SIZE] = { 0 };
+
+/*
+ * These functions must only be called with the stat mutex locked
+ */
+static int nthw_xstats_get(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n && i < nb_names; i++) {
+ stats[i].id = i;
+
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ stats[i].value = *((uint64_t *)&rx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 2:
+ /* TX stat */
+ stats[i].value = *((uint64_t *)&tx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ stats[i].value = *((uint64_t *)&flm_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[0][i];
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ stats[i].value = *((uint64_t *)&pld_ptr[names[i].offset]);
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ default:
+ stats[i].value = 0;
+ break;
+ }
+ }
+
+ return i;
+}
+
+static int nthw_xstats_get_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+ int count = 0;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] < nb_names) {
+ switch (names[ids[i]].source) {
+ case 1:
+ /* RX stat */
+ values[i] = *((uint64_t *)&rx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 2:
+ /* TX stat */
+ values[i] = *((uint64_t *)&tx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ values[i] = *((uint64_t *)&flm_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[0][ids[i]];
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ values[i] = *((uint64_t *)&pld_ptr[names[ids[i]].offset]);
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ default:
+ values[i] = 0;
+ break;
+ }
+
+ count++;
+ }
+ }
+
+ return count;
+}
+
+static void nthw_xstats_reset(nt4ga_stat_t *p_nt4ga_stat, uint8_t port)
+{
+ unsigned int i;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < nb_names; i++) {
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&rx_ptr[names[i].offset]);
+ break;
+
+ case 2:
+ /* TX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&tx_ptr[names[i].offset]);
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ /* Reset makes no sense for flm_count_current */
+ /* Reset can't be used for load_lps, load_aps, max_lps and max_aps */
+ if (flm_ptr &&
+ (strcmp(names[i].name, "flm_count_current") != 0 &&
+ strcmp(names[i].name, "flm_count_load_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_load_aps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_aps") != 0)) {
+ nthw_xstats_reset_val[0][i] =
+ *((uint64_t *)&flm_ptr[names[i].offset]);
+ }
+
+ break;
+
+ case 4:
+ /* Port load stat */
+ /* No reset */
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/*
+ * These functions do not require the stat mutex to be locked
+ */
+static int nthw_xstats_get_names(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size && i < nb_names; i++) {
+ strlcpy(xstats_names[i].name, names[i].name, sizeof(xstats_names[i].name));
+ count++;
+ }
+
+ return count;
+}
+
+static int nthw_xstats_get_names_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] < nb_names) {
+ strlcpy(xstats_names[i].name,
+ names[ids[i]].name,
+ RTE_ETH_XSTATS_NAME_SIZE);
+ }
+
+ count++;
+ }
+
+ return count;
+}
+
+static struct ntnic_xstats_ops ops = {
+ .nthw_xstats_get_names = nthw_xstats_get_names,
+ .nthw_xstats_get = nthw_xstats_get,
+ .nthw_xstats_reset = nthw_xstats_reset,
+ .nthw_xstats_get_names_by_id = nthw_xstats_get_names_by_id,
+ .nthw_xstats_get_by_id = nthw_xstats_get_by_id
+};
+
+void ntnic_xstats_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "xstats module was initialized");
+ register_ntnic_xstats_ops(&ops);
+}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
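The xstats name tables in the patch above all follow one pattern: each entry pairs a counter name with a source tag (1 = RX port counters, 2 = TX port counters, 3 = FLM, 4 = port load) and an offsetof() into the corresponding counter struct, so the getters can read any counter generically through a byte pointer. A minimal sketch of that dispatch follows; the struct and field names here are illustrative stand-ins, not the driver's real ones:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mini counter block, mirroring the driver's offsetof() tables */
struct counters {
	uint64_t octets;
	uint64_t pkts;
};

struct xstat_name {
	const char *name;
	int source;    /* 1 = RX counters, 2 = TX counters */
	size_t offset; /* byte offset into the counter struct */
};

static const struct xstat_name names[] = {
	{ "rx_octets", 1, offsetof(struct counters, octets) },
	{ "rx_packets", 1, offsetof(struct counters, pkts) },
	{ "tx_octets", 2, offsetof(struct counters, octets) },
};

/* Read one stat: pick the base pointer by source tag, then subtract a
 * previously captured reset snapshot, as nthw_xstats_get() does.
 */
static uint64_t read_xstat(const struct xstat_name *n,
			   const struct counters *rx,
			   const struct counters *tx,
			   uint64_t reset_val)
{
	const uint8_t *base = (const uint8_t *)(n->source == 1 ? rx : tx);
	uint64_t raw;

	/* memcpy avoids any alignment concerns of a direct cast */
	memcpy(&raw, base + n->offset, sizeof(raw));
	return raw - reset_val;
}
```

In the driver the subtracted snapshot lives in the per-port `nthw_xstats_reset_val` array, which is refreshed by `nthw_xstats_reset()`; that lets xstats be zeroed in software without touching the hardware counters. The `reset_val` argument stands in for that here.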
* [PATCH v1 62/73] net/ntnic: added flow statistics
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (60 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 61/73] net/ntnic: add xstats Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 63/73] net/ntnic: add scrub registers Serhii Iliushyk
` (14 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
xstats were extended with flow statistics support.
Additional counters show learn, unlearn, LPS, APS,
and other events.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 40 ++++
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +-
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 142 ++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.c | 176 ++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 52 ++++++
.../profile_inline/flow_api_profile_inline.c | 46 +++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +++++
drivers/net/ntnic/ntnic_ethdev.c | 132 +++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +
13 files changed, 656 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
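The nt4ga_stat.c hunk in this patch normalizes the FLM load counters with the simplified expression `(load * 32) / FLM_LOAD_WINDOWS_SIZE`, which its in-code comment derives from the full rpp/bin formula by cancellation. A small sketch checking that the two forms agree when the intermediate divisions are exact; the window size and rpp values used here are illustrative assumptions, not the hardware's actual parameters:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed window size for illustration only */
#define FLM_LOAD_WINDOWS_SIZE 8ULL

/* Full formula from the patch comment: bin is derived from the
 * resolution-per-picosecond product parameter (rpp).
 */
static uint64_t load_full(uint64_t raw, uint64_t rpp)
{
	uint64_t bin = ((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) /
			(32ULL * rpp)) - 1ULL;

	return (raw * 1000000000000ULL) / ((bin + 1ULL) * rpp);
}

/* Simplified form actually used for load_lps/load_aps: the rpp and
 * 10^12 factors cancel, leaving only the window scaling.
 */
static uint64_t load_simplified(uint64_t raw)
{
	return (raw * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
}
```

With these sample values both forms yield the same result, which is why the driver can drop the rpp lookup from the hot path.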
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 3afc5b7853..8fedfdcd04 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -189,6 +189,24 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return -1;
}
+ if (get_flow_filter_ops() != NULL) {
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+ p_nt4ga_stat->flm_stat_ver = ndev->be.flm.ver;
+ p_nt4ga_stat->mp_stat_structs_flm = calloc(1, sizeof(struct flm_counters_v1));
+
+ if (!p_nt4ga_stat->mp_stat_structs_flm) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_APS_MAX, 0);
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_LPS_MAX, 0);
+ }
+
p_nt4ga_stat->mp_port_load =
calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
@@ -236,6 +254,7 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
return -1;
nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
@@ -542,6 +561,27 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
(uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
}
+ /* Update and get FLM stats */
+ flow_filter_ops->flow_get_flm_stats(ndev, (uint64_t *)p_nt4ga_stat->mp_stat_structs_flm,
+ sizeof(struct flm_counters_v1) / sizeof(uint64_t));
+
+ /*
+ * Calculate correct load values:
+ * rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ * bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) - 1ULL);
+ * load_aps = ((uint64_t)load_aps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ * load_lps = ((uint64_t)load_lps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ *
+ * Simplified it gives:
+ *
+ * load_lps = (load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ * load_aps = (load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ */
+
+ p_nt4ga_stat->mp_stat_structs_flm->load_aps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
+ p_nt4ga_stat->mp_stat_structs_flm->load_lps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
return 0;
}
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 17d5755634..9cd9d92823 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 38e4d0ca35..677aa7b6c8 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -17,6 +17,7 @@ typedef struct ntdrv_4ga_s {
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
rte_thread_t stat_thread;
+ rte_thread_t port_event_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e59ac5bdb3..c0b7729929 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -59,6 +59,7 @@ sources = files(
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
+ 'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index ce28fd2fa1..3d6bec2009 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1060,11 +1060,14 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
- (void)ndev;
- (void)data;
- (void)size;
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ return -1;
+
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE)
+ return profile_inline_ops->flow_get_flm_stats_profile_inline(ndev, data, size);
- NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
return -1;
}
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f4c29b8bde..1845f74166 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,148 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_stat_update(be->be_dev, &be->flm);
+}
+
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STAT_LRN_DONE:
+ *value = be->flm.v25.lrn_done->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_IGNORE:
+ *value = be->flm.v25.lrn_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_FAIL:
+ *value = be->flm.v25.lrn_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_DONE:
+ *value = be->flm.v25.unl_done->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_IGNORE:
+ *value = be->flm.v25.unl_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_DONE:
+ *value = be->flm.v25.rel_done->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_IGNORE:
+ *value = be->flm.v25.rel_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_DONE:
+ *value = be->flm.v25.prb_done->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_IGNORE:
+ *value = be->flm.v25.prb_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_DONE:
+ *value = be->flm.v25.aul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_IGNORE:
+ *value = be->flm.v25.aul_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_FAIL:
+ *value = be->flm.v25.aul_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_TUL_DONE:
+ *value = be->flm.v25.tul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_FLOWS:
+ *value = be->flm.v25.flows->cnt;
+ break;
+
+ case HW_FLM_LOAD_LPS:
+ *value = be->flm.v25.load_lps->lps;
+ break;
+
+ case HW_FLM_LOAD_APS:
+ *value = be->flm.v25.load_aps->aps;
+ break;
+
+ default: {
+ if (_VER_ < 18)
+ return UNSUP_FIELD;
+
+ switch (field) {
+ case HW_FLM_STAT_STA_DONE:
+ *value = be->flm.v25.sta_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_DONE:
+ *value = be->flm.v25.inf_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_SKIP:
+ *value = be->flm.v25.inf_skip->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_HIT:
+ *value = be->flm.v25.pck_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_MISS:
+ *value = be->flm.v25.pck_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_UNH:
+ *value = be->flm.v25.pck_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_DIS:
+ *value = be->flm.v25.pck_dis->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_HIT:
+ *value = be->flm.v25.csh_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_MISS:
+ *value = be->flm.v25.csh_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_UNH:
+ *value = be->flm.v25.csh_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_START:
+ *value = be->flm.v25.cuc_start->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_MOVE:
+ *value = be->flm.v25.cuc_move->cnt;
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+ }
+ break;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
new file mode 100644
index 0000000000..98b0e8347a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -0,0 +1,176 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+#include <rte_errno.h>
+
+#include "ntlog.h"
+#include "flm_evt_queue.h"
+
+/* Local queues for flm statistic events */
+static struct rte_ring *info_q_local[MAX_INFO_LCL_QUEUES];
+
+/* Remote queues for flm statistic events */
+static struct rte_ring *info_q_remote[MAX_INFO_RMT_QUEUES];
+
+/* Local queues for flm status records */
+static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
+
+/* Remote queues for flm status records */
+static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+
+
+static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
+{
+ static_assert((FLM_EVT_ELEM_SIZE & ~(size_t)3) == FLM_EVT_ELEM_SIZE,
+ "FLM EVENT struct size");
+ static_assert((FLM_STAT_ELEM_SIZE & ~(size_t)3) == FLM_STAT_ELEM_SIZE,
+ "FLM STAT struct size");
+ char name[20] = "NONE";
+ struct rte_ring *q;
+ uint32_t elem_size = 0;
+ uint32_t queue_size = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port >= MAX_INFO_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_INFO_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port >= MAX_INFO_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_INFO_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port >= MAX_STAT_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_STAT_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port >= MAX_STAT_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_STAT_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue create illegal caller: %u", caller);
+ return NULL;
+ }
+
+ q = rte_ring_create_elem(name,
+ elem_size,
+ queue_size,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN, FILTER, "FLM queues cannot be created due to error %02X", rte_errno);
+ return NULL;
+ }
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ info_q_local[port] = q;
+ break;
+
+ case FLM_INFO_REMOTE:
+ info_q_remote[port] = q;
+ break;
+
+ case FLM_STAT_LOCAL:
+ stat_q_local[port] = q;
+ break;
+
+ case FLM_STAT_REMOTE:
+ stat_q_remote[port] = q;
+ break;
+
+ default:
+ break;
+ }
+
+ return q;
+}
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+	/* If the queue is not created, then ignore and return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES) {
+ if (info_q_local[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_local[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_LOCAL) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES) {
+ if (info_q_remote[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_remote[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_REMOTE) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ return -ENOENT;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
new file mode 100644
index 0000000000..238be7a3b2
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_EVT_QUEUE_H_
+#define _FLM_EVT_QUEUE_H_
+
+#include <stdint.h>
+#include <stdbool.h>
+
+struct flm_status_event_s {
+ void *flow;
+ uint32_t learn_ignore : 1;
+ uint32_t learn_failed : 1;
+ uint32_t learn_done : 1;
+};
+
+struct flm_info_event_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum {
+ FLM_INFO_LOCAL,
+ FLM_INFO_REMOTE,
+ FLM_STAT_LOCAL,
+ FLM_STAT_REMOTE,
+};
+
+/* Max number of local queues */
+#define MAX_INFO_LCL_QUEUES 8
+#define MAX_STAT_LCL_QUEUES 8
+
+/* Max number of remote queues */
+#define MAX_INFO_RMT_QUEUES 128
+#define MAX_STAT_RMT_QUEUES 128
+
+/* queue size */
+#define FLM_EVT_QUEUE_SIZE 8192
+#define FLM_STAT_QUEUE_SIZE 8192
+
+/* Event element size */
+#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
+#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+
+#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index afb1c13f57..9c401f5ec2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4471,6 +4471,48 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
return 0;
}
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ const enum hw_flm_e fields[] = {
+ HW_FLM_STAT_FLOWS, HW_FLM_STAT_LRN_DONE, HW_FLM_STAT_LRN_IGNORE,
+ HW_FLM_STAT_LRN_FAIL, HW_FLM_STAT_UNL_DONE, HW_FLM_STAT_UNL_IGNORE,
+ HW_FLM_STAT_AUL_DONE, HW_FLM_STAT_AUL_IGNORE, HW_FLM_STAT_AUL_FAIL,
+ HW_FLM_STAT_TUL_DONE, HW_FLM_STAT_REL_DONE, HW_FLM_STAT_REL_IGNORE,
+ HW_FLM_STAT_PRB_DONE, HW_FLM_STAT_PRB_IGNORE,
+
+ HW_FLM_STAT_STA_DONE, HW_FLM_STAT_INF_DONE, HW_FLM_STAT_INF_SKIP,
+ HW_FLM_STAT_PCK_HIT, HW_FLM_STAT_PCK_MISS, HW_FLM_STAT_PCK_UNH,
+ HW_FLM_STAT_PCK_DIS, HW_FLM_STAT_CSH_HIT, HW_FLM_STAT_CSH_MISS,
+ HW_FLM_STAT_CSH_UNH, HW_FLM_STAT_CUC_START, HW_FLM_STAT_CUC_MOVE,
+
+ HW_FLM_LOAD_LPS, HW_FLM_LOAD_APS,
+ };
+
+ const uint64_t fields_cnt = sizeof(fields) / sizeof(enum hw_flm_e);
+
+ if (!ndev->flow_mgnt_prepared)
+ return 0;
+
+ if (size < fields_cnt)
+ return -1;
+
+ hw_mod_flm_stat_update(&ndev->be);
+
+ for (uint64_t i = 0; i < fields_cnt; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_stat_get(&ndev->be, fields[i], &value);
+ data[i] = (fields[i] == HW_FLM_STAT_FLOWS || fields[i] == HW_FLM_LOAD_LPS ||
+ fields[i] == HW_FLM_LOAD_APS)
+ ? value
+ : data[i] + value;
+
+ if (ndev->be.flm.ver < 18 && fields[i] == HW_FLM_STAT_PRB_IGNORE)
+ break;
+ }
+
+ return 0;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4487,6 +4529,10 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * Stats
+ */
+ .flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index c695842077..b44d3a7291 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -52,4 +52,10 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+/*
+ * Stats
+ */
+
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/rte_pmd_ntnic.h b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
new file mode 100644
index 0000000000..4a1ba18a5e
--- /dev/null
+++ b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
@@ -0,0 +1,43 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTNIC_EVENT_H_
+#define NTNIC_EVENT_H_
+
+#include <rte_ethdev.h>
+
+typedef struct ntnic_flm_load_s {
+ uint64_t lookup;
+ uint64_t lookup_maximum;
+ uint64_t access;
+ uint64_t access_maximum;
+} ntnic_flm_load_t;
+
+typedef struct ntnic_port_load_s {
+ uint64_t rx_pps;
+ uint64_t rx_pps_maximum;
+ uint64_t tx_pps;
+ uint64_t tx_pps_maximum;
+ uint64_t rx_bps;
+ uint64_t rx_bps_maximum;
+ uint64_t tx_bps;
+ uint64_t tx_bps_maximum;
+} ntnic_port_load_t;
+
+struct ntnic_flm_statistic_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum rte_ntnic_event_type {
+ RTE_NTNIC_FLM_LOAD_EVENT = RTE_ETH_EVENT_MAX,
+ RTE_NTNIC_PORT_LOAD_EVENT,
+ RTE_NTNIC_FLM_STATS_EVENT,
+};
+
+#endif /* NTNIC_EVENT_H_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index f6a74c7df2..9c286a4f35 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,8 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_evt_queue.h"
+#include "rte_pmd_ntnic.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
@@ -1419,6 +1421,7 @@ drv_deinit(struct drv_s *p_drv)
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
+ THREAD_JOIN(p_nt_drv->port_event_thread);
}
/* stop adapter */
@@ -1711,6 +1714,123 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.rss_hash_conf_get = rss_hash_conf_get,
};
+/*
+ * Port event thread
+ */
+THREAD_FUNC port_event_thread_fn(void *context)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)context;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[internals->port_id];
+ uint8_t port_no = internals->port;
+
+ ntnic_flm_load_t flmdata;
+ ntnic_port_load_t portdata;
+
+ memset(&flmdata, 0, sizeof(flmdata));
+ memset(&portdata, 0, sizeof(portdata));
+
+ while (ndev != NULL && ndev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ /*
+ * FLM load measurement
+ * Do only send event, if there has been a change
+ */
+ if (p_nt4ga_stat->flm_stat_ver > 22 && p_nt4ga_stat->mp_stat_structs_flm) {
+ if (flmdata.lookup != p_nt4ga_stat->mp_stat_structs_flm->load_lps ||
+ flmdata.access != p_nt4ga_stat->mp_stat_structs_flm->load_aps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ flmdata.lookup = p_nt4ga_stat->mp_stat_structs_flm->load_lps;
+ flmdata.access = p_nt4ga_stat->mp_stat_structs_flm->load_aps;
+ flmdata.lookup_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps;
+ flmdata.access_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_FLM_LOAD_EVENT,
+ &flmdata);
+ }
+ }
+ }
+
+ /*
+ * Port load measurement
+ * Do only send event, if there has been a change.
+ */
+ if (p_nt4ga_stat->mp_port_load) {
+ if (portdata.rx_bps != p_nt4ga_stat->mp_port_load[port_no].rx_bps ||
+ portdata.tx_bps != p_nt4ga_stat->mp_port_load[port_no].tx_bps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ portdata.rx_bps = p_nt4ga_stat->mp_port_load[port_no].rx_bps;
+ portdata.tx_bps = p_nt4ga_stat->mp_port_load[port_no].tx_bps;
+ portdata.rx_pps = p_nt4ga_stat->mp_port_load[port_no].rx_pps;
+ portdata.tx_pps = p_nt4ga_stat->mp_port_load[port_no].tx_pps;
+ portdata.rx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_pps_max;
+ portdata.tx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_pps_max;
+ portdata.rx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_bps_max;
+ portdata.tx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_bps_max;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_PORT_LOAD_EVENT,
+ &portdata);
+ }
+ }
+ }
+
+ /* Process events */
+ {
+ int count = 0;
+ bool do_wait = true;
+
+ while (count < 5000) {
+ /* Local FLM statistic events */
+ struct flm_info_event_s data;
+
+ if (flm_inf_queue_get(port_no, FLM_INFO_LOCAL, &data) == 0) {
+ if (eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ struct ntnic_flm_statistic_s event_data;
+ event_data.bytes = data.bytes;
+ event_data.packets = data.packets;
+ event_data.cause = data.cause;
+ event_data.id = data.id;
+ event_data.timestamp = data.timestamp;
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)
+ RTE_NTNIC_FLM_STATS_EVENT,
+ &event_data);
+ do_wait = false;
+ }
+ }
+
+ if (do_wait)
+ nt_os_wait_usec(10);
+
+ count++;
+ do_wait = true;
+ }
+ }
+ }
+
+ return THREAD_RETURN;
+}
+
/*
* Adapter flm stat thread
*/
@@ -2237,6 +2357,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* increase initialized ethernet devices - PF */
p_drv->n_eth_dev_init_count++;
+
+ /* Port event thread */
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ res = THREAD_CTRL_CREATE(&p_nt_drv->port_event_thread, "nt_port_event_thr",
+ port_event_thread_fn, (void *)internals);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
}
return 0;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 65e7972c68..7325bd1ea8 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -290,6 +290,13 @@ struct profile_inline_ops {
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+ /*
+ * Stats
+ */
+ int (*flow_get_flm_stats_profile_inline)(struct flow_nic_dev *ndev,
+ uint64_t *data,
+ uint64_t size);
+
/*
* NT Flow FLM queue API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
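The patch above pairs lazily created per-port rte_ring queues (single-producer/single-consumer, created with RING_F_SP_ENQ | RING_F_SC_DEQ) with a dequeue helper that creates the queue on first use and otherwise reports -ENOENT. The following is a minimal plain-C sketch of that lazy-create-then-dequeue pattern; the ring type, helper names, and sizes are illustrative stand-ins, not the driver's actual rte_ring calls:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define MAX_QUEUES 8
#define QUEUE_SIZE 16

/* Fixed-size single-producer/single-consumer copy-out ring, standing in
 * for an rte_ring created with RING_F_SP_ENQ | RING_F_SC_DEQ. */
struct ring {
	size_t head, tail;	/* head: next slot to read, tail: next to write */
	size_t elem_size;
	unsigned char buf[QUEUE_SIZE][64];
};

static struct ring *queues[MAX_QUEUES];	/* lazily created, one per port */

static struct ring *queue_create(unsigned int port, size_t elem_size)
{
	if (port >= MAX_QUEUES || elem_size > 64)
		return NULL;

	struct ring *q = calloc(1, sizeof(*q));

	if (q != NULL)
		q->elem_size = elem_size;

	queues[port] = q;
	return q;
}

static int ring_enqueue(struct ring *q, const void *obj)
{
	if (q->tail - q->head == QUEUE_SIZE)
		return -1;	/* full */

	memcpy(q->buf[q->tail % QUEUE_SIZE], obj, q->elem_size);
	q->tail++;
	return 0;
}

/* Mirrors the driver pattern: create the queue on first use,
 * return failure when the queue is missing or empty. */
static int queue_get(unsigned int port, void *obj, size_t elem_size)
{
	if (port >= MAX_QUEUES)
		return -1;

	if (queues[port] == NULL && queue_create(port, elem_size) == NULL)
		return -1;

	struct ring *q = queues[port];

	if (q->head == q->tail)
		return -1;	/* empty, like a failed rte_ring_sc_dequeue_elem() */

	memcpy(obj, q->buf[q->head % QUEUE_SIZE], q->elem_size);
	q->head++;
	return 0;
}
```

In this sketch, as in the driver, a consumer may poll before any producer has posted an event; the first poll creates the queue and fails cleanly, and later polls succeed once data has been enqueued.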
* [PATCH v1 63/73] net/ntnic: add scrub registers
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (61 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 62/73] net/ntnic: added flow statistics Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 64/73] net/ntnic: update documentation Serhii Iliushyk
` (13 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Scrub fields were added to the FPGA map file.
A duplicated macro was removed.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 17 ++++++++++++++++-
drivers/net/ntnic/ntnic_ethdev.c | 3 ---
2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 620968ceb6..f1033ca949 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -728,7 +728,7 @@ static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
{ FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
{ FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
{ FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
- { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 }, { FLM_LRN_DATA_SCRUB_PROF, 4, 712, 0x0000 },
{ FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
{ FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
{ FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
@@ -782,6 +782,18 @@ static nthw_fpga_field_init_s flm_scan_fields[] = {
{ FLM_SCAN_I, 16, 0, 0 },
};
+static nthw_fpga_field_init_s flm_scrub_ctrl_fields[] = {
+ { FLM_SCRUB_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_SCRUB_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scrub_data_fields[] = {
+ { FLM_SCRUB_DATA_DEL, 1, 12, 0 },
+ { FLM_SCRUB_DATA_INF, 1, 13, 0 },
+ { FLM_SCRUB_DATA_R, 4, 8, 0 },
+ { FLM_SCRUB_DATA_T, 8, 0, 0 },
+};
+
static nthw_fpga_field_init_s flm_status_fields[] = {
{ FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
{ FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
@@ -921,6 +933,8 @@ static nthw_fpga_register_init_s flm_registers[] = {
{ FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
{ FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
{ FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_SCRUB_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_scrub_ctrl_fields },
+ { FLM_SCRUB_DATA, 11, 14, NTHW_FPGA_REG_TYPE_WO, 0, 4, flm_scrub_data_fields },
{ FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
{ FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
{ FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
@@ -3058,6 +3072,7 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
+ { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 9c286a4f35..263b3ee7d4 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -47,9 +47,6 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
-/* Max RSS queues */
-#define MAX_QUEUES 125
-
#define NUM_VQ_SEGS(_data_size_) \
({ \
size_t _size = (_data_size_); \
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
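Each field entry in the tables above follows the nthw_fpga_field_init_s layout: {name, bit-width, bit-offset, reset value} within a register word — e.g. FLM_SCRUB_DATA_T is 8 bits at offset 0 and FLM_SCRUB_DATA_DEL is 1 bit at offset 12. A hedged sketch of how such width/offset pairs pack and extract register fields (the helper names are invented for illustration, not ntnic's nthw accessors):

```c
#include <stdint.h>

/* Bit-width and bit-offset of a field inside a 32-bit register word.
 * The values below follow the flm_scrub_data_fields table. */
struct field {
	unsigned int width, offset;
};

static const struct field SCRUB_T   = { 8, 0 };		/* FLM_SCRUB_DATA_T */
static const struct field SCRUB_R   = { 4, 8 };		/* FLM_SCRUB_DATA_R */
static const struct field SCRUB_DEL = { 1, 12 };	/* FLM_SCRUB_DATA_DEL */
static const struct field SCRUB_INF = { 1, 13 };	/* FLM_SCRUB_DATA_INF */

static uint32_t field_set(uint32_t reg, struct field f, uint32_t val)
{
	uint32_t mask = ((f.width < 32 ? 1u << f.width : 0u) - 1u) << f.offset;

	return (reg & ~mask) | ((val << f.offset) & mask);
}

static uint32_t field_get(uint32_t reg, struct field f)
{
	uint32_t mask = (f.width < 32 ? 1u << f.width : 0u) - 1u;

	return (reg >> f.offset) & mask;
}
```

With these helpers, composing a 14-bit FLM_SCRUB_DATA word is a sequence of field_set() calls, and decoding one read back from hardware is a sequence of field_get() calls.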
* [PATCH v1 64/73] net/ntnic: update documentation
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (62 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 63/73] net/ntnic: add scrub registers Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 65/73] net/ntnic: added flow aged APIs Serhii Iliushyk
` (12 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Update required documentation
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 2c160ae592..e7e1cbcff7 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -40,6 +40,36 @@ Features
- Unicast MAC filter
- Multicast MAC filter
- Promiscuous mode (Enable only. The device always run promiscuous mode)
+- Multiple TX and RX queues.
+- Scatter and gather support for TX and RX.
+- RSS hash
+- RSS key update
+- RSS based on VLAN or 5-tuple.
+- RSS using different combinations of fields: L3 only, L4 only or both, and
+ source only, destination only or both.
+- Several RSS hash keys, one for each flow type.
+- Default RSS operation with no hash key specification.
+- VLAN filtering.
+- RX VLAN stripping via raw decap.
+- TX VLAN insertion via raw encap.
+- Flow API.
+- Multiple process support.
+- Tunnel types: GTP.
+- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
+ verification.
+- Support for multiple rte_flow groups.
+- Encapsulation and decapsulation of GTP data.
+- Packet modification: NAT, TTL decrement, DSCP tagging.
+- Traffic mirroring.
+- Jumbo frame support.
+- Port and queue statistics.
+- RMON statistics in extended stats.
+- Flow metering, including meter policy API.
+- Link state information.
+- CAM and TCAM based matching.
+- Exact match of 140 million flows and policies.
+- Basic stats
+- Extended stats
Limitations
~~~~~~~~~~~
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 65/73] net/ntnic: added flow aged APIs
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (63 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 64/73] net/ntnic: update documentation Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 66/73] net/ntnic: add aged API to the inline profile Serhii Iliushyk
` (11 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Flow aged API was added to the flow_filter_ops.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 71 +++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 88 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++
3 files changed, 182 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 3d6bec2009..d30f7ee2da 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1058,6 +1058,70 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+static int flow_get_aged_flows(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline_ops uninitialized");
+ return -1;
+ }
+
+ if (nb_contexts > 0 && !context) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "rte_flow_get_aged_flows - empty context";
+ return -1;
+ }
+
+ return profile_inline_ops->flow_get_aged_flows_profile_inline(dev, caller_id, context,
+ nb_contexts, error);
+}
+
+static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_info;
+ (void)queue_info;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_attr;
+ (void)queue_attr;
+ (void)nb_queue;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
@@ -1086,6 +1150,13 @@ static const struct flow_filter_ops ops = {
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
.flow_get_flm_stats = flow_get_flm_stats,
+ .flow_get_aged_flows = flow_get_aged_flows,
+
+ /*
+ * NT Flow asynchronous operations API
+ */
+ .flow_info_get = flow_info_get,
+ .flow_configure = flow_configure,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index e2fce02afa..9f8670b32d 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -718,6 +718,91 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int eth_flow_get_aged_flows(struct rte_eth_dev *eth_dev,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_get_aged_flows(internals->flw_dev, caller_id, context,
+ nb_contexts, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+/*
+ * NT Flow asynchronous operations API
+ */
+
+static int eth_flow_info_get(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_info_get(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (struct rte_flow_port_info *)port_info,
+ (struct rte_flow_queue_info *)queue_info,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr,
+ uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_configure(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (const struct rte_flow_port_attr *)port_attr,
+ nb_queue,
+ (const struct rte_flow_queue_attr **)queue_attr,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static int poll_statistics(struct pmd_internals *internals)
{
int flow;
@@ -844,6 +929,9 @@ static const struct rte_flow_ops dev_flow_ops = {
.destroy = eth_flow_destroy,
.flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
+ .get_aged_flows = eth_flow_get_aged_flows,
+ .info_get = eth_flow_info_get,
+ .configure = eth_flow_configure,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 7325bd1ea8..a199aff61f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -280,6 +280,12 @@ struct profile_inline_ops {
uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_get_aged_flows_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -348,6 +354,23 @@ struct flow_filter_ops {
struct rte_flow_error *error);
int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+ int (*flow_get_aged_flows)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
+ /*
+ * NT Flow asynchronous operations API
+ */
+ int (*flow_info_get)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
/*
* Other
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 66/73] net/ntnic: add aged API to the inline profile
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (64 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 65/73] net/ntnic: added flow aged APIs Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 67/73] net/ntnic: add info and configure flow API Serhii Iliushyk
` (10 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Added implementation for the flow get aged API.
The module which operates with the age queue was extended with
get, count and size operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../flow_api/profile_inline/flm_age_queue.c | 49 ++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 24 +++++++++
.../profile_inline/flow_api_profile_inline.c | 51 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 6 +++
5 files changed, 131 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index c0b7729929..8c6d02a5ec 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -58,6 +58,7 @@ sources = files(
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_age_queue.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
new file mode 100644
index 0000000000..f6f04009fe
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -0,0 +1,49 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <rte_ring.h>
+
+#include "ntlog.h"
+#include "flm_age_queue.h"
+
+/* Queues for flm aged events */
+static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+	/* If the queue is not created, then ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue empty");
+
+ return ret;
+ }
+
+ return -ENOENT;
+}
+
+unsigned int flm_age_queue_count(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_count(age_queue[caller_id]);
+
+ return ret;
+}
+
+unsigned int flm_age_queue_get_size(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_get_size(age_queue[caller_id]);
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
new file mode 100644
index 0000000000..d61609cc01
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -0,0 +1,24 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_AGE_QUEUE_H_
+#define _FLM_AGE_QUEUE_H_
+
+#include <stdint.h>
+
+struct flm_age_event_s {
+ void *context;
+};
+
+/* Max number of event queues */
+#define MAX_EVT_AGE_QUEUES 256
+
+#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
+unsigned int flm_age_queue_count(uint16_t caller_id);
+unsigned int flm_age_queue_get_size(uint16_t caller_id);
+
+#endif /* _FLM_AGE_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9c401f5ec2..bcc61821ab 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -7,6 +7,7 @@
#include "nt_util.h"
#include "hw_mod_backend.h"
+#include "flm_age_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -4399,6 +4400,55 @@ static void dump_flm_data(const uint32_t *data, FILE *file)
}
}
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ unsigned int queue_size = flm_age_queue_get_size(caller_id);
+
+ if (queue_size == 0) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue size is not configured";
+ return -1;
+ }
+
+ unsigned int queue_count = flm_age_queue_count(caller_id);
+
+ if (context == NULL)
+ return queue_count;
+
+ if (queue_count < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+		error->message = "Aged queue contains fewer records than the expected output";
+ return -1;
+ }
+
+ if (queue_size < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Defined aged queue size is smaller than the expected output";
+ return -1;
+ }
+
+ uint32_t idx;
+
+ for (idx = 0; idx < nb_contexts; ++idx) {
+ struct flm_age_event_s obj;
+ int ret = flm_age_queue_get(caller_id, &obj);
+
+ if (ret != 0)
+ break;
+
+ context[idx] = obj.context;
+ }
+
+ return idx;
+}
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -4527,6 +4577,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_get_aged_flows_profile_inline = flow_get_aged_flows_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b44d3a7291..e1934bc6a6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -48,6 +48,12 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
FILE *file,
struct rte_flow_error *error);
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
--
2.45.0
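The aged-flows getter added above follows the usual rte_flow_get_aged_flows contract: a NULL context array returns only the number of queued age events, otherwise up to nb_contexts events are drained into the array and the count of drained events is returned. A minimal self-contained sketch of that contract (a plain array stands in for the driver's per-caller rte_ring; all names here are illustrative, not driver API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's per-caller age event ring. */
struct toy_age_queue {
	void *events[8];
	unsigned int count;
};

/* Mirrors the getter contract: NULL context -> return the queue count,
 * otherwise drain up to nb_contexts events and return how many. */
static int toy_get_aged_flows(struct toy_age_queue *q, void **context,
			      uint32_t nb_contexts)
{
	if (context == NULL)
		return (int)q->count;

	uint32_t idx;
	for (idx = 0; idx < nb_contexts && q->count > 0; ++idx)
		context[idx] = q->events[--q->count];

	return (int)idx;
}
```

The real implementation additionally reports errors through rte_flow_error when the aged queue was never configured or is smaller than the requested output.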
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 67/73] net/ntnic: add info and configure flow API
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (65 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 66/73] net/ntnic: add aged API to the inline profile Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 68/73] net/ntnic: add aged flow event Serhii Iliushyk
` (9 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The inline profile was extended with flow info and configure APIs.
The module that operates on the age queue was extended with
create and free operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
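The queue-create path added in this patch rejects element counts that rte_ring cannot honour: the count must be a power of two and must not exceed RTE_RING_SZ_MASK. A self-contained sketch of that validation (the mask value below is the one rte_ring.h defines today, 0x7fffffff; treat it as an assumption if your DPDK version differs):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Same bound rte_ring enforces (assumed from rte_ring.h). */
#define TOY_RING_SZ_MASK 0x7fffffffU

/* Mirrors the create-path check: count must be a power of two
 * and must fit within the ring size mask. */
static bool toy_ring_count_valid(unsigned int count)
{
	if (count == 0)
		return false;

	if ((count & (count - 1)) != 0)	/* power-of-two test */
		return false;

	return count <= TOY_RING_SZ_MASK;
}
```

When the check fails, the driver logs a warning and returns NULL instead of creating the ring, which flow_configure then reports as a configuration error.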
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 19 +----
.../flow_api/profile_inline/flm_age_queue.c | 79 +++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 5 ++
.../profile_inline/flow_api_profile_inline.c | 59 ++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 9 +++
drivers/net/ntnic/ntnic_mod_reg.h | 9 +++
7 files changed, 168 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index ed96f77bc0..89f071d982 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -77,6 +77,9 @@ struct flow_eth_dev {
/* QSL_HSH index if RSS needed QSL v6+ */
int rss_target_id;
+ /* The size of buffer for aged out flow list */
+ uint32_t nb_aging_objects;
+
struct flow_eth_dev *next;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d30f7ee2da..009b56c258 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1085,12 +1085,6 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_info;
- (void)queue_info;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1098,20 +1092,14 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_info_get_profile_inline(dev, caller_id, port_info,
+ queue_info, error);
}
static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_attr;
- (void)queue_attr;
- (void)nb_queue;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1119,7 +1107,8 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_configure_profile_inline(dev, caller_id, port_attr,
+ nb_queue, queue_attr, error);
}
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index f6f04009fe..1022583c4f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -4,12 +4,91 @@
*/
#include <rte_ring.h>
+#include <rte_errno.h>
+#include <rte_stdatomic.h>
+#include <stdint.h>
#include "ntlog.h"
#include "flm_age_queue.h"
/* Queues for flm aged events */
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+static uint16_t age_event[MAX_EVT_AGE_PORTS];
+
+void flm_age_queue_free(uint8_t port, uint16_t caller_id)
+{
+ struct rte_ring *q = NULL;
+
+ if (port < MAX_EVT_AGE_PORTS)
+ rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ q = age_queue[caller_id];
+ age_queue[caller_id] = NULL;
+ }
+
+ if (q != NULL)
+ rte_ring_free(q);
+}
+
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
+{
+ char name[20];
+ struct rte_ring *q = NULL;
+
+ if (rte_is_power_of_2(count) == false || count > RTE_RING_SZ_MASK) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue number of elements (%u) is invalid, must be power of 2, and not exceed %u",
+ count,
+ RTE_RING_SZ_MASK);
+ return NULL;
+ }
+
+ if (port >= MAX_EVT_AGE_PORTS) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_EVT_AGE_PORTS - 1);
+ return NULL;
+ }
+
+ rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
+
+ if (caller_id >= MAX_EVT_AGE_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for caller_id %u. Max supported caller_id is %u",
+ caller_id,
+ MAX_EVT_AGE_QUEUES - 1);
+ return NULL;
+ }
+
+ if (age_queue[caller_id] != NULL) {
+ NT_LOG(DBG, FILTER, "FLM aged event queue %u already created", caller_id);
+ return age_queue[caller_id];
+ }
+
+ snprintf(name, 20, "AGE_EVENT%u", caller_id);
+ q = rte_ring_create_elem(name,
+ FLM_AGE_ELEM_SIZE,
+ count,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created due to error %02X",
+ rte_errno);
+ return NULL;
+ }
+
+ age_queue[caller_id] = q;
+
+ return q;
+}
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index d61609cc01..9ff6ef6de0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -15,8 +15,13 @@ struct flm_age_event_s {
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
+/* Max number of event ports */
+#define MAX_EVT_AGE_PORTS 128
+
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index bcc61821ab..f4308ff3de 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4563,6 +4563,63 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
return 0;
}
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ (void)queue_info;
+ (void)caller_id;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+ memset(port_info, 0, sizeof(struct rte_flow_port_info));
+
+ port_info->max_nb_aging_objects = dev->nb_aging_objects;
+
+ return res;
+}
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ (void)nb_queue;
+ (void)queue_attr;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (port_attr->nb_aging_objects > 0) {
+ if (dev->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ struct rte_ring *age_queue =
+ flm_age_queue_create(dev->port_id, caller_id, port_attr->nb_aging_objects);
+
+ if (age_queue == NULL) {
+ error->message = "Failed to allocate aging objects";
+ goto error_out;
+ }
+
+ dev->nb_aging_objects = port_attr->nb_aging_objects;
+ }
+
+ return res;
+
+error_out:
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+
+ if (port_attr->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ return -1;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4584,6 +4641,8 @@ static const struct profile_inline_ops ops = {
* Stats
*/
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
+ .flow_info_get_profile_inline = flow_info_get_profile_inline,
+ .flow_configure_profile_inline = flow_configure_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e1934bc6a6..ea1d9c31b2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -64,4 +64,13 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error);
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index a199aff61f..029b0ac4eb 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -309,6 +309,15 @@ struct profile_inline_ops {
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
uint32_t (*flm_update)(struct flow_eth_dev *dev);
+
+ int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v1 68/73] net/ntnic: add aged flow event
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (66 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 67/73] net/ntnic: add info and configure flow API Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 23:22 ` Stephen Hemminger
2024-10-21 21:05 ` [PATCH v1 69/73] net/ntnic: add thread termination Serhii Iliushyk
` (8 subsequent siblings)
76 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The port thread was extended with a new age event callback handler.
Getters and setters for the LRN, INF, and STA registers were added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
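The INF/STA record handlers in this patch map a caller_id back to a port: ids in [0, MAX_VDPA_PORTS] are remote (vDPA) callers and map one-to-one, while higher ids are local ports offset by MAX_VDPA_PORTS + 1. A self-contained sketch of that mapping (the MAX_VDPA_PORTS value of 4 is an assumption purely for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_MAX_VDPA_PORTS 4	/* assumed value, for illustration only */

/* Mirrors is_remote_caller(): low caller ids map 1:1 to remote
 * ports, higher ids are local ports after the vDPA range. */
static bool toy_is_remote_caller(uint8_t caller_id, uint8_t *port)
{
	if (caller_id < TOY_MAX_VDPA_PORTS + 1) {
		*port = caller_id;
		return true;
	}

	*port = caller_id - TOY_MAX_VDPA_PORTS - 1;
	return false;
}
```

The returned flag selects between the remote and local status queues when an event is enqueued for the port thread to pick up.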
---
drivers/net/ntnic/include/hw_mod_backend.h | 7 +
.../net/ntnic/nthw/flow_api/flow_id_table.c | 16 +++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 75 +++++++++++
.../flow_api/profile_inline/flm_age_queue.c | 28 ++++
.../flow_api/profile_inline/flm_age_queue.h | 12 ++
.../flow_api/profile_inline/flm_evt_queue.c | 20 +++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 121 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 16 +++
10 files changed, 299 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 9cd9d92823..92e1205640 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be);
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
@@ -695,6 +698,10 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
uint32_t *sta_word_cnt);
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 5635ac4524..a3f5e1d7f7 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -129,3 +129,19 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
pthread_mutex_unlock(&handle->mtx);
}
+
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, id);
+
+ *caller_id = element->caller_id;
+ *type = element->type;
+ memcpy(flm_h, &element->handle, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index e190fe4a11..edb4f42729 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -20,4 +20,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
uint8_t type);
void ntnic_id_table_free_id(void *id_table, uint32_t id);
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 1845f74166..996abfb28d 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,52 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_buf_ctrl_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_buf_ctrl_mod_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value)
+{
+ int get = 1; /* Only get supported */
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_BUF_CTRL_LRN_FREE:
+ GET_SET(be->flm.v25.buf_ctrl->lrn_free, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_INF_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->inf_avail, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_STA_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->sta_avail, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_buf_ctrl_mod_get(be, field, value);
+}
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
{
return be->iface->flm_stat_update(be->be_dev, &be->flm);
@@ -887,3 +933,32 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
return ret;
}
+
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_INF_STA_DATA:
+ be->iface->flm_inf_sta_data_update(be->be_dev, &be->flm, inf_value,
+ inf_size, inf_word_cnt, sta_value,
+ sta_size, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index 1022583c4f..fc192ff05d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -15,6 +15,21 @@
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
static uint16_t age_event[MAX_EVT_AGE_PORTS];
+__rte_always_inline int flm_age_event_get(uint8_t port)
+{
+ return rte_atomic_load_explicit(&age_event[port], rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_set(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 1, rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_clear(uint8_t port)
+{
+ rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
+}
+
void flm_age_queue_free(uint8_t port, uint16_t caller_id)
{
struct rte_ring *q = NULL;
@@ -90,6 +105,19 @@ struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned
return q;
}
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+ /* If queues is not created, then ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue full");
+ }
+}
+
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 9ff6ef6de0..27154836c5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -12,6 +12,14 @@ struct flm_age_event_s {
void *context;
};
+/* Indicates why the flow info record was generated */
+#define INF_DATA_CAUSE_SW_UNLEARN 0
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED 1
+#define INF_DATA_CAUSE_NA 2
+#define INF_DATA_CAUSE_PERIODIC_FLOW_INFO 3
+#define INF_DATA_CAUSE_SW_PROBE 4
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT 5
+
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
@@ -20,8 +28,12 @@ struct flm_age_event_s {
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+int flm_age_event_get(uint8_t port);
+void flm_age_event_set(uint8_t port);
+void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 98b0e8347a..db9687714f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -138,6 +138,26 @@ static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
return q;
}
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
+{
+ struct rte_ring **stat_q = remote ? stat_q_remote : stat_q_local;
+
+ if (port >= (remote ? MAX_STAT_RMT_QUEUES : MAX_STAT_LCL_QUEUES))
+ return -1;
+
+ if (stat_q[port] == NULL) {
+ if (flm_evt_queue_create(port, remote ? FLM_STAT_REMOTE : FLM_STAT_LOCAL) == NULL)
+ return -1;
+ }
+
+ if (rte_ring_sp_enqueue_elem(stat_q[port], obj, FLM_STAT_ELEM_SIZE) != 0) {
+ NT_LOG(DBG, FILTER, "FLM local status queue full");
+ return -1;
+ }
+
+ return 0;
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 238be7a3b2..3a61f844b6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,5 +48,6 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f4308ff3de..72e79b2f86 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,7 @@
#include "hw_mod_backend.h"
#include "flm_age_queue.h"
+#include "flm_evt_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -19,6 +20,13 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#define DMA_BLOCK_SIZE 256
+#define DMA_OVERHEAD 20
+#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
+#define MAX_STA_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_STA_DATA)
+#define WORDS_PER_INF_DATA (sizeof(struct flm_v25_inf_data_s) / sizeof(uint32_t))
+#define MAX_INF_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_INF_DATA)
+
#define NT_FLM_MISS_FLOW_TYPE 0
#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
@@ -70,14 +78,127 @@ static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
return r.num;
}
+static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
+{
+ if (caller_id < MAX_VDPA_PORTS + 1) {
+ *port = caller_id;
+ return true;
+ }
+
+ *port = caller_id - MAX_VDPA_PORTS - 1;
+ return false;
+}
+
+static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_inf_data_s *inf_data =
+ (struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, inf_data->id, &flm_h, &caller_id,
+ &type);
+
+ /* Check that received record hold valid meter statistics */
+ if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
+
+ age_event.context = fh->context;
+
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
+ break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
+ }
+ }
+ }
+}
+
+static void flm_mtr_read_sta_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_sta_data_s *sta_data =
+ (struct flm_v25_sta_data_s *)&data[i * WORDS_PER_STA_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, sta_data->id, &flm_h, &caller_id,
+ &type);
+
+ if (type == 1) {
+ uint8_t port;
+ bool remote_caller = is_remote_caller(caller_id, &port);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+ ((struct flow_handle *)flm_h.p)->learn_ignored = 1;
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ struct flm_status_event_s data = {
+ .flow = flm_h.p,
+ .learn_ignore = sta_data->lis,
+ .learn_failed = sta_data->lfs,
+ };
+
+ flm_sta_queue_put(port, remote_caller, &data);
+ }
+ }
+}
+
static uint32_t flm_update(struct flow_eth_dev *dev)
{
static uint32_t inf_word_cnt;
static uint32_t sta_word_cnt;
+ uint32_t inf_data[DMA_BLOCK_SIZE];
+ uint32_t sta_data[DMA_BLOCK_SIZE];
+
+ if (inf_word_cnt >= WORDS_PER_INF_DATA || sta_word_cnt >= WORDS_PER_STA_DATA) {
+ uint32_t inf_records = inf_word_cnt / WORDS_PER_INF_DATA;
+
+ if (inf_records > MAX_INF_DATA_RECORDS_PER_READ)
+ inf_records = MAX_INF_DATA_RECORDS_PER_READ;
+
+ uint32_t sta_records = sta_word_cnt / WORDS_PER_STA_DATA;
+
+ if (sta_records > MAX_STA_DATA_RECORDS_PER_READ)
+ sta_records = MAX_STA_DATA_RECORDS_PER_READ;
+
+ hw_mod_flm_inf_sta_data_update_get(&dev->ndev->be, HW_FLM_FLOW_INF_STA_DATA,
+ inf_data, inf_records * WORDS_PER_INF_DATA,
+ &inf_word_cnt, sta_data,
+ sta_records * WORDS_PER_STA_DATA,
+ &sta_word_cnt);
+
+ if (inf_records > 0)
+ flm_mtr_read_inf_records(dev, inf_data, inf_records);
+
+ if (sta_records > 0)
+ flm_mtr_read_sta_records(dev, sta_data, sta_records);
+
+ return 1;
+ }
+
if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
return 1;
+ hw_mod_flm_buf_ctrl_update(&dev->ndev->be);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_INF_AVAIL, &inf_word_cnt);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_STA_AVAIL, &sta_word_cnt);
+
return inf_word_cnt + sta_word_cnt;
}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 263b3ee7d4..6cac8da17e 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,7 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_age_queue.h"
#include "profile_inline/flm_evt_queue.h"
#include "rte_pmd_ntnic.h"
@@ -1816,6 +1817,21 @@ THREAD_FUNC port_event_thread_fn(void *context)
}
}
+ /* AGED event */
+ /* Note: RTE_FLOW_PORT_FLAG_STRICT_QUEUE flag is not supported so
+ * event is always generated
+ */
+ int aged_event_count = flm_age_event_get(port_no);
+
+ if (aged_event_count > 0 && eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_FLOW_AGED,
+ NULL);
+ flm_age_event_clear(port_no);
+ do_wait = false;
+ }
+
if (do_wait)
nt_os_wait_usec(10);
--
2.45.0
* [PATCH v1 69/73] net/ntnic: add thread termination
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (67 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 68/73] net/ntnic: add aged flow event Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 70/73] net/ntnic: add age documentation Serhii Iliushyk
` (7 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Introduce clear_pdrv to unregister the driver
from global tracking.
Modify drv_deinit to call clear_pdrv and ensure
safe termination.
Add freeing of the FLM status and age event queues.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
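The free paths in this patch detach the ring pointer from its table slot before calling rte_ring_free(), so a second free for the same port/caller slot sees NULL and becomes a no-op. A minimal sketch of that idempotent detach-then-free pattern (a plain malloc'd object stands in for the rte_ring):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define TOY_QUEUES 4

static void *toy_queue[TOY_QUEUES];

/* Detach first, free second: a repeated call for the same slot
 * finds a NULL pointer and does nothing. */
static void toy_queue_free(unsigned int idx)
{
	void *q = NULL;

	if (idx < TOY_QUEUES && toy_queue[idx] != NULL) {
		q = toy_queue[idx];
		toy_queue[idx] = NULL;
	}

	if (q != NULL)
		free(q);
}
```

flm_age_queue_free_all() and flm_inf_sta_queue_free_all() simply run this per-slot free over every port and caller id during driver deinit.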
---
.../flow_api/profile_inline/flm_age_queue.c | 10 +++
.../flow_api/profile_inline/flm_age_queue.h | 1 +
.../flow_api/profile_inline/flm_evt_queue.c | 76 +++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 12 +++
5 files changed, 100 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index fc192ff05d..ad916a7bcc 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -46,6 +46,16 @@ void flm_age_queue_free(uint8_t port, uint16_t caller_id)
rte_ring_free(q);
}
+void flm_age_queue_free_all(void)
+{
+ int i;
+ int j;
+
+ for (i = 0; i < MAX_EVT_AGE_PORTS; i++)
+ for (j = 0; j < MAX_EVT_AGE_QUEUES; j++)
+ flm_age_queue_free(i, j);
+}
+
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
{
char name[20];
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 27154836c5..55c410ac86 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -32,6 +32,7 @@ int flm_age_event_get(uint8_t port);
void flm_age_event_set(uint8_t port);
void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+void flm_age_queue_free_all(void);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index db9687714f..761609a0ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -25,6 +25,82 @@ static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
/* Remote queues for flm status records */
static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+static void flm_inf_sta_queue_free(uint8_t port, uint8_t caller)
+{
+ struct rte_ring *q = NULL;
+
+ /* If the queue is not created, then ignore and return */
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ q = info_q_local[port];
+ info_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ q = info_q_remote[port];
+ info_q_remote[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port < MAX_STAT_LCL_QUEUES && stat_q_local[port] != NULL) {
+ q = stat_q_local[port];
+ stat_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port < MAX_STAT_RMT_QUEUES && stat_q_remote[port] != NULL) {
+ q = stat_q_remote[port];
+ stat_q_remote[port] = NULL;
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ break;
+ }
+
+ if (q)
+ rte_ring_free(q);
+}
+
+void flm_inf_sta_queue_free_all(uint8_t caller)
+{
+ int count = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ count = MAX_INFO_LCL_QUEUES;
+ break;
+
+ case FLM_INFO_REMOTE:
+ count = MAX_INFO_RMT_QUEUES;
+ break;
+
+ case FLM_STAT_LOCAL:
+ count = MAX_STAT_LCL_QUEUES;
+ break;
+
+ case FLM_STAT_REMOTE:
+ count = MAX_STAT_RMT_QUEUES;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ return;
+ }
+
+ for (int i = 0; i < count; i++)
+ flm_inf_sta_queue_free(i, caller);
+}
static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 3a61f844b6..d61b282472 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -47,6 +47,7 @@ enum {
#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+void flm_inf_sta_queue_free_all(uint8_t caller);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 6cac8da17e..15374d3045 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1416,6 +1416,18 @@ drv_deinit(struct drv_s *p_drv)
p_drv->ntdrv.b_shutdown = true;
THREAD_JOIN(p_nt_drv->stat_thread);
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ THREAD_JOIN(p_nt_drv->flm_thread);
+ profile_inline_ops->flm_free_queues();
+ THREAD_JOIN(p_nt_drv->port_event_thread);
+ /* Free all local flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_LOCAL);
+ /* Free all remote flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_REMOTE);
+ /* Free all aged flow event queues */
+ flm_age_queue_free_all();
+ }
+
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 70/73] net/ntnic: add age documentation
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (68 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 69/73] net/ntnic: add thread termination Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 71/73] net/ntnic: add meter API Serhii Iliushyk
` (6 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The ntnic.rst document was extended with the age feature specification.
ntnic.ini was extended with rte_flow action age support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 18 ++++++++++++++++++
doc/guides/rel_notes/release_24_11.rst | 15 +++++++++------
3 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 947c7ba3a1..af2981ccf6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -33,6 +33,7 @@ udp = Y
vlan = Y
[rte_flow actions]
+age = Y
drop = Y
jump = Y
mark = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e7e1cbcff7..e5a8d71892 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -148,3 +148,21 @@ FILTER
To enable logging on all levels use wildcard in the following way::
--log-level=pmd.net.ntnic.*,8
+
+Flow Scanner
+------------
+
+Flow Scanner is a DPDK mechanism that constantly and periodically scans the RTE flow tables to check for aged-out flows.
+When a flow timeout is reached, i.e. no packets were matched by the flow within the timeout period,
+the ``RTE_ETH_EVENT_FLOW_AGED`` event is reported, and the flow is marked as aged-out.
+
+Therefore, the flow scanner functionality is closely connected to the RTE flows' ``age`` action.
+
+The ``age timeout`` action has the following characteristics:
+ - it functions only in group > 0;
+ - the flow timeout is specified in seconds;
+ - the flow scanner checks flow age timeouts once every 1-480 seconds; therefore, flows may not age out immediately, depending on how long the intervals between flow scanner checks are;
+ - the aging counters can display a maximum of **n - 1** aged flows when the aging counters are set to **n**;
+ - overall, 15 different timeouts can be specified for flows at the same time; this limit is combined for all actions, and the maximum of 15 can be reached only across different groups - within one group the limit is 14 distinct timeouts;
+ - after a flow is aged out, it is not automatically deleted;
+ - an aged-out flow can be updated with the ``flow update`` command, which reverts its aged-out status.
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index fa4822d928..5be9660287 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -154,12 +154,15 @@ New Features
* **Updated Napatech ntnic net driver [EXPERIMENTAL].**
- * Updated supported version of the FPGA to 9563.55.49.
- * Extended and fixed logging.
- * Added NT flow filter initialization.
- * Added NT flow backend initialization.
- * Added initialization of FPGA modules related to flow HW offload.
- * Added basic handling of the virtual queues.
+ * Updated the supported version of the FPGA to 9563.55.49
+ * Fixed Coverity issues
+ * Fixed issues related to release 24.07
+ * Extended and fixed the implementation of the logging
+ * Added NT flow filter init API
+ * Added NT flow backend initialization API
+ * Added initialization of FPGA modules related to flow HW offload
+ * Added basic handling of the virtual queues
+ * Added age rte_flow action support
* **Added cryptodev queue pair reset support.**
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 71/73] net/ntnic: add meter API
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (69 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 70/73] net/ntnic: add age documentation Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 72/73] net/ntnic: add meter module Serhii Iliushyk
` (5 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add meter API and implementation to the inline profile.
Management functions were extended with meter flow support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 +
.../flow_api/profile_inline/flm_evt_queue.c | 21 +
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 534 +++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 34 ++
6 files changed, 578 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 89f071d982..032063712a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -100,6 +100,7 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *flm_mtr_handle;
void *group_handle;
void *hw_db_handle;
void *id_table_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 155a9e1fd6..8f1a6419f3 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -57,6 +57,7 @@ enum res_type_e {
#define MAX_TCAM_START_OFFSETS 4
+#define MAX_FLM_MTRS_SUPPORTED 4
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
@@ -215,6 +216,8 @@ struct nic_flow_def {
uint32_t jump_to_group;
+ uint32_t mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
int full_offload;
/*
@@ -307,6 +310,8 @@ struct flow_handle {
uint32_t flm_db_idx_counter;
uint32_t flm_db_idxs[RES_COUNT];
+ uint32_t flm_mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
uint32_t flm_data[10];
uint8_t flm_prot;
uint8_t flm_kid;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 761609a0ea..d76c7da568 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -234,6 +234,27 @@ int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
return 0;
}
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+ /* If the queue is not created, then ignore and return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_local[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM local info queue full");
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_remote[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM remote info queue full");
+ }
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index d61b282472..ee8175cf25 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,6 +48,7 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
void flm_inf_sta_queue_free_all(uint8_t caller);
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 72e79b2f86..1738e55fbe 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -20,6 +20,10 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#define FLM_MTR_PROFILE_SIZE 0x100000
+#define FLM_MTR_STAT_SIZE 0x1000000
+#define UINT64_MSB ((uint64_t)1 << 63)
+
#define DMA_BLOCK_SIZE 256
#define DMA_OVERHEAD 20
#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
@@ -45,8 +49,336 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
+#define POLICING_PARAMETER_OFFSET 4096
+#define SIZE_CONVERTER 1099.511627776
+
+struct flm_mtr_stat_s {
+ struct dual_buckets_s *buckets;
+ atomic_uint_fast64_t n_pkt;
+ atomic_uint_fast64_t n_bytes;
+ uint64_t n_pkt_base;
+ uint64_t n_bytes_base;
+ atomic_uint_fast64_t stats_mask;
+ uint32_t flm_id;
+};
+
+struct flm_mtr_shared_stats_s {
+ struct flm_mtr_stat_s *stats;
+ uint32_t size;
+ int shared;
+};
+
+struct flm_flow_mtr_handle_s {
+ struct dual_buckets_s {
+ uint16_t rate_a;
+ uint16_t rate_b;
+ uint16_t size_a;
+ uint16_t size_b;
+ } dual_buckets[FLM_MTR_PROFILE_SIZE];
+
+ struct flm_mtr_shared_stats_s *port_stats[UINT8_MAX];
+};
+
static void *flm_lrn_queue_arr;
+static int flow_mtr_supported(struct flow_eth_dev *dev)
+{
+ return hw_mod_flm_present(&dev->ndev->be) && dev->ndev->be.flm.nb_variant == 2;
+}
+
+static uint64_t flow_mtr_meter_policy_n_max(void)
+{
+ return FLM_MTR_PROFILE_SIZE;
+}
+
+static inline uint64_t convert_policing_parameter(uint64_t value)
+{
+ uint64_t limit = POLICING_PARAMETER_OFFSET;
+ uint64_t shift = 0;
+ uint64_t res = value;
+
+ while (shift < 15 && value >= limit) {
+ limit <<= 1;
+ ++shift;
+ }
+
+ if (shift != 0) {
+ uint64_t tmp = POLICING_PARAMETER_OFFSET * (1 << (shift - 1));
+
+ if (tmp > value) {
+ res = 0;
+
+ } else {
+ tmp = value - tmp;
+ res = tmp >> (shift - 1);
+ }
+
+ if (res >= POLICING_PARAMETER_OFFSET)
+ res = POLICING_PARAMETER_OFFSET - 1;
+
+ res = res | (shift << 12);
+ }
+
+ return res;
+}
+
+static int flow_mtr_set_profile(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a, uint64_t bucket_rate_b,
+ uint64_t bucket_size_b)
+{
+ struct flow_nic_dev *ndev = dev->ndev;
+ struct flm_flow_mtr_handle_s *handle =
+ (struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle;
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ /* Round rates up to nearest 128 bytes/sec and shift to 128 bytes/sec units */
+ bucket_rate_a = (bucket_rate_a + 127) >> 7;
+ bucket_rate_b = (bucket_rate_b + 127) >> 7;
+
+ buckets->rate_a = convert_policing_parameter(bucket_rate_a);
+ buckets->rate_b = convert_policing_parameter(bucket_rate_b);
+
+ /* Round size down to 38-bit int */
+ if (bucket_size_a > 0x3fffffffff)
+ bucket_size_a = 0x3fffffffff;
+
+ if (bucket_size_b > 0x3fffffffff)
+ bucket_size_b = 0x3fffffffff;
+
+ /* Convert size to units of 2^40 / 10^9. Output is a 28-bit int. */
+ bucket_size_a = bucket_size_a / SIZE_CONVERTER;
+ bucket_size_b = bucket_size_b / SIZE_CONVERTER;
+
+ buckets->size_a = convert_policing_parameter(bucket_size_a);
+ buckets->size_b = convert_policing_parameter(bucket_size_b);
+
+ return 0;
+}
+
+static int flow_mtr_set_policy(struct flow_eth_dev *dev, uint32_t policy_id, int drop)
+{
+ (void)dev;
+ (void)policy_id;
+ (void)drop;
+ return 0;
+}
+
+static uint32_t flow_mtr_meters_supported(struct flow_eth_dev *dev, uint8_t caller_id)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ return handle->port_stats[caller_id]->size;
+}
+
+static int flow_mtr_create_meter(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t mtr_id,
+ uint32_t profile_id,
+ uint32_t policy_id,
+ uint64_t stats_mask)
+{
+ (void)policy_id;
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ union flm_handles flm_h;
+ flm_h.idx = mtr_id;
+ uint32_t flm_id = ntnic_id_table_get_id(dev->ndev->id_table_handle, flm_h, caller_id, 2);
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = buckets->rate_a;
+ learn_record->size = buckets->size_a;
+ learn_record->fill = buckets->size_a;
+
+ learn_record->ft_mbr =
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE; /* FT to assign if MBR has been exceeded */
+
+ learn_record->ent = 1;
+ learn_record->op = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ if (stats_mask)
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ mtr_stat[mtr_id].buckets = buckets;
+ mtr_stat[mtr_id].flm_id = flm_id;
+ atomic_store(&mtr_stat[mtr_id].stats_mask, stats_mask);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 3;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 0;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ /* Clear statistics so stats_mask prevents updates of counters on deleted meters */
+ atomic_store(&mtr_stat[mtr_id].stats_mask, 0);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, 0);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, 0);
+ mtr_stat[mtr_id].n_bytes_base = 0;
+ mtr_stat[mtr_id].n_pkt_base = 0;
+ mtr_stat[mtr_id].buckets = NULL;
+
+ ntnic_id_table_free_id(dev->ndev->id_table_handle, flm_id);
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = &handle->port_stats[caller_id]->stats[mtr_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = mtr_stat->flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = mtr_stat->buckets->rate_a;
+ learn_record->size = mtr_stat->buckets->size_a;
+ learn_record->adj = adjust_value;
+
+ learn_record->ft_mbr = NT_FLM_VIOLATING_MBR_FLOW_TYPE;
+
+ learn_record->ent = 1;
+ learn_record->op = 2;
+ learn_record->eor = 1;
+
+ if (atomic_load(&mtr_stat->stats_mask))
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
static void flm_setup_queues(void)
{
flm_lrn_queue_arr = flm_lrn_queue_create();
@@ -91,6 +423,8 @@ static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
for (uint32_t i = 0; i < records; ++i) {
struct flm_v25_inf_data_s *inf_data =
(struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
@@ -101,29 +435,62 @@ static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, u
&type);
/* Check that received record hold valid meter statistics */
- if (type == 1) {
- switch (inf_data->cause) {
- case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
- case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
- struct flow_handle *fh = (struct flow_handle *)flm_h.p;
- struct flm_age_event_s age_event;
- uint8_t port;
+ if (type == 2) {
+ uint64_t mtr_id = flm_h.idx;
+
+ if (mtr_id < handle->port_stats[caller_id]->size) {
+ struct flm_mtr_stat_s *mtr_stat =
+ handle->port_stats[caller_id]->stats;
+
+ /* Don't update a deleted meter */
+ uint64_t stats_mask = atomic_load(&mtr_stat[mtr_id].stats_mask);
+
+ if (stats_mask) {
+ atomic_store(&mtr_stat[mtr_id].n_pkt,
+ inf_data->packets | UINT64_MSB);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, inf_data->bytes);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, inf_data->packets);
+ struct flm_info_event_s stat_data;
+ bool remote_caller;
+ uint8_t port;
+
+ remote_caller = is_remote_caller(caller_id, &port);
+
+ /* Save stat data to flm stat queue */
+ stat_data.bytes = inf_data->bytes;
+ stat_data.packets = inf_data->packets;
+ stat_data.id = mtr_id;
+ stat_data.timestamp = inf_data->ts;
+ stat_data.cause = inf_data->cause;
+ flm_inf_queue_put(port, remote_caller, &stat_data);
+ }
+ }
- age_event.context = fh->context;
+ /* Check that received record hold valid flow data */
- is_remote_caller(caller_id, &port);
+ } else if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
- flm_age_queue_put(caller_id, &age_event);
- flm_age_event_set(port);
- }
- break;
+ age_event.context = fh->context;
- case INF_DATA_CAUSE_SW_UNLEARN:
- case INF_DATA_CAUSE_NA:
- case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
- case INF_DATA_CAUSE_SW_PROBE:
- default:
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
}
}
}
@@ -202,6 +569,42 @@ static uint32_t flm_update(struct flow_eth_dev *dev)
return inf_word_cnt + sta_word_cnt;
}
+static void flm_mtr_read_stats(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ *stats_mask = atomic_load(&mtr_stat[id].stats_mask);
+
+ if (*stats_mask) {
+ uint64_t pkt_1;
+ uint64_t pkt_2;
+ uint64_t nb;
+
+ do {
+ do {
+ pkt_1 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 & UINT64_MSB);
+
+ nb = atomic_load(&mtr_stat[id].n_bytes);
+ pkt_2 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 != pkt_2);
+
+ *green_pkt = pkt_1 - mtr_stat[id].n_pkt_base;
+ *green_bytes = nb - mtr_stat[id].n_bytes_base;
+
+ if (clear) {
+ mtr_stat[id].n_pkt_base = pkt_1;
+ mtr_stat[id].n_bytes_base = nb;
+ }
+ }
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -2512,6 +2915,13 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub, uint32_t priority)
{
(void)flm_scrub;
+ for (int i = 0; i < MAX_FLM_MTRS_SUPPORTED; ++i) {
+ struct flm_flow_mtr_handle_s *handle = fh->dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[fh->caller_id]->stats;
+ fh->flm_mtr_ids[i] =
+ fd->mtr_ids[i] == UINT32_MAX ? 0 : mtr_stat[fd->mtr_ids[i]].flm_id;
+ }
+
switch (fd->l4_prot) {
case PROT_L4_TCP:
fh->flm_prot = 6;
@@ -3544,6 +3954,29 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (ndev->id_table_handle == NULL)
goto err_exit0;
+ ndev->flm_mtr_handle = calloc(1, sizeof(struct flm_flow_mtr_handle_s));
+ struct flm_mtr_shared_stats_s *flm_shared_stats =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *flm_stats =
+ calloc(FLM_MTR_STAT_SIZE, sizeof(struct flm_mtr_stat_s));
+
+ if (ndev->flm_mtr_handle == NULL || flm_shared_stats == NULL ||
+ flm_stats == NULL) {
+ free(ndev->flm_mtr_handle);
+ free(flm_shared_stats);
+ free(flm_stats);
+ goto err_exit0;
+ }
+
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ ((struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle)->port_stats[i] =
+ flm_shared_stats;
+ }
+
+ flm_shared_stats->stats = flm_stats;
+ flm_shared_stats->size = FLM_MTR_STAT_SIZE;
+ flm_shared_stats->shared = UINT8_MAX;
+
if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
goto err_exit0;
@@ -3578,6 +4011,17 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ struct flm_flow_mtr_handle_s *handle = ndev->flm_mtr_handle;
+ handle->port_stats[i]->shared -= 1;
+
+ if (handle->port_stats[i]->shared == 0) {
+ free(handle->port_stats[i]->stats);
+ free(handle->port_stats[i]);
+ }
+ }
+
+ free(ndev->flm_mtr_handle);
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
@@ -4697,6 +5151,11 @@ int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
port_info->max_nb_aging_objects = dev->nb_aging_objects;
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle)
+ port_info->max_nb_meters = mtr_handle->port_stats[caller_id]->size;
+
return res;
}
@@ -4728,6 +5187,35 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
dev->nb_aging_objects = port_attr->nb_aging_objects;
}
+ if (port_attr->nb_meters > 0) {
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle->port_stats[caller_id]->shared == 1) {
+ res = realloc(mtr_handle->port_stats[caller_id]->stats,
+ port_attr->nb_meters * sizeof(struct flm_mtr_stat_s)) == NULL
+ ? -1
+ : 0;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+
+ } else {
+ mtr_handle->port_stats[caller_id] =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *stats =
+ calloc(port_attr->nb_meters, sizeof(struct flm_mtr_stat_s));
+
+ if (mtr_handle->port_stats[caller_id] == NULL || stats == NULL) {
+ free(mtr_handle->port_stats[caller_id]);
+ free(stats);
+ error->message = "Failed to allocate meter actions";
+ goto error_out;
+ }
+
+ mtr_handle->port_stats[caller_id]->stats = stats;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+ mtr_handle->port_stats[caller_id]->shared = 1;
+ }
+ }
+
return res;
error_out:
@@ -4767,8 +5255,18 @@ static const struct profile_inline_ops ops = {
/*
* NT Flow FLM Meter API
*/
+ .flow_mtr_supported = flow_mtr_supported,
+ .flow_mtr_meter_policy_n_max = flow_mtr_meter_policy_n_max,
+ .flow_mtr_set_profile = flow_mtr_set_profile,
+ .flow_mtr_set_policy = flow_mtr_set_policy,
+ .flow_mtr_create_meter = flow_mtr_create_meter,
+ .flow_mtr_probe_meter = flow_mtr_probe_meter,
+ .flow_mtr_destroy_meter = flow_mtr_destroy_meter,
+ .flm_mtr_adjust_stats = flm_mtr_adjust_stats,
+ .flow_mtr_meters_supported = flow_mtr_meters_supported,
.flm_setup_queues = flm_setup_queues,
.flm_free_queues = flm_free_queues,
+ .flm_mtr_read_stats = flm_mtr_read_stats,
.flm_update = flm_update,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 029b0ac4eb..503674f4a4 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -303,10 +303,44 @@ struct profile_inline_ops {
uint64_t *data,
uint64_t size);
+ /*
+ * NT Flow FLM Meter API
+ */
+ int (*flow_mtr_supported)(struct flow_eth_dev *dev);
+
+ uint64_t (*flow_mtr_meter_policy_n_max)(void);
+
+ int (*flow_mtr_set_profile)(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a,
+ uint64_t bucket_rate_b, uint64_t bucket_size_b);
+
+ int (*flow_mtr_set_policy)(struct flow_eth_dev *dev, uint32_t policy_id, int drop);
+
+ int (*flow_mtr_create_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t profile_id, uint32_t policy_id, uint64_t stats_mask);
+
+ int (*flow_mtr_probe_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id);
+
+ int (*flow_mtr_destroy_meter)(struct flow_eth_dev *dev, uint8_t caller_id,
+ uint32_t mtr_id);
+
+ int (*flm_mtr_adjust_stats)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value);
+
+ uint32_t (*flow_mtr_meters_supported)(struct flow_eth_dev *dev, uint8_t caller_id);
+
/*
* NT Flow FLM queue API
*/
void (*flm_setup_queues)(void);
+ void (*flm_mtr_read_stats)(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear);
+
void (*flm_free_queues)(void);
uint32_t (*flm_update)(struct flow_eth_dev *dev);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v1 72/73] net/ntnic: add meter module
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (70 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 71/73] net/ntnic: add meter API Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 73/73] net/ntnic: add meter documentation Serhii Iliushyk
` (4 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The meter module was added with support for:
1. add/remove profile
2. create/destroy flow
3. add/remove meter policy
4. read/update stats
The eth_dev_ops struct was extended with the ops above.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/ntos_drv.h | 14 +
drivers/net/ntnic/meson.build | 2 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 11 +-
drivers/net/ntnic/ntnic_mod_reg.c | 18 +
drivers/net/ntnic/ntnic_mod_reg.h | 11 +
6 files changed, 538 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 7b3c8ff3d6..f6ce442d17 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -12,6 +12,7 @@
#include <inttypes.h>
#include <rte_ether.h>
+#include "rte_mtr.h"
#include "stream_binary_flow_api.h"
#include "nthw_drv.h"
@@ -90,6 +91,19 @@ struct __rte_cache_aligned ntnic_tx_queue {
enum fpga_info_profile profile; /* Inline / Capture */
};
+struct nt_mtr_profile {
+ LIST_ENTRY(nt_mtr_profile) next;
+ uint32_t profile_id;
+ struct rte_mtr_meter_profile profile;
+};
+
+struct nt_mtr {
+ LIST_ENTRY(nt_mtr) next;
+ uint32_t mtr_id;
+ int shared;
+ struct nt_mtr_profile *profile;
+};
+
struct pmd_internals {
const struct rte_pci_device *pci_dev;
struct flow_eth_dev *flw_dev;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 8c6d02a5ec..ca46541ef3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -17,6 +17,7 @@ includes = [
include_directories('nthw'),
include_directories('nthw/supported'),
include_directories('nthw/model'),
+ include_directories('nthw/ntnic_meter'),
include_directories('nthw/flow_filter'),
include_directories('nthw/flow_api'),
include_directories('nim/'),
@@ -92,6 +93,7 @@ sources = files(
'nthw/flow_filter/flow_nthw_tx_cpy.c',
'nthw/flow_filter/flow_nthw_tx_ins.c',
'nthw/flow_filter/flow_nthw_tx_rpl.c',
+ 'nthw/ntnic_meter/ntnic_meter.c',
'nthw/model/nthw_fpga_model.c',
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
diff --git a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
new file mode 100644
index 0000000000..e4e8fe0c7d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
@@ -0,0 +1,483 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_meter.h>
+#include <rte_mtr.h>
+#include <rte_mtr_driver.h>
+#include <rte_malloc.h>
+
+#include "ntos_drv.h"
+#include "ntlog.h"
+#include "nt_util.h"
+#include "ntos_system.h"
+#include "ntnic_mod_reg.h"
+
+static inline uint8_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + (uint8_t)(port & 0x7f) + 1;
+}
+
+struct qos_integer_fractional {
+ uint32_t integer;
+ uint32_t fractional; /* 1/1024 */
+};
+
+/*
+ * Inline FLM metering
+ */
+
+static int eth_mtr_capabilities_get_inline(struct rte_eth_dev *eth_dev,
+ struct rte_mtr_capabilities *cap,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (!profile_inline_ops->flow_mtr_supported(internals->flw_dev)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Ethernet device does not support metering");
+ }
+
+ memset(cap, 0x0, sizeof(struct rte_mtr_capabilities));
+
+ /* MBR records use 28-bit integers */
+ cap->n_max = profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id);
+ cap->n_shared_max = cap->n_max;
+
+ cap->identical = 0;
+ cap->shared_identical = 0;
+
+ cap->shared_n_flows_per_mtr_max = UINT32_MAX;
+
+ /* Limited by number of MBR record ids per FLM learn record */
+ cap->chaining_n_mtrs_per_flow_max = 4;
+
+ cap->chaining_use_prev_mtr_color_supported = 0;
+ cap->chaining_use_prev_mtr_color_enforced = 0;
+
+ cap->meter_rate_max = (uint64_t)(0xfff << 0xf) * 1099;
+
+ cap->stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ /* Only color-blind mode is supported */
+ cap->color_aware_srtcm_rfc2697_supported = 0;
+ cap->color_aware_trtcm_rfc2698_supported = 0;
+ cap->color_aware_trtcm_rfc4115_supported = 0;
+
+ /* Focused on RFC2698 for now */
+ cap->meter_srtcm_rfc2697_n_max = 0;
+ cap->meter_trtcm_rfc2698_n_max = cap->n_max;
+ cap->meter_trtcm_rfc4115_n_max = 0;
+
+ cap->meter_policy_n_max = profile_inline_ops->flow_mtr_meter_policy_n_max();
+
+ /* Byte mode is supported */
+ cap->srtcm_rfc2697_byte_mode_supported = 0;
+ cap->trtcm_rfc2698_byte_mode_supported = 1;
+ cap->trtcm_rfc4115_byte_mode_supported = 0;
+
+ /* Packet mode not supported */
+ cap->srtcm_rfc2697_packet_mode_supported = 0;
+ cap->trtcm_rfc2698_packet_mode_supported = 0;
+ cap->trtcm_rfc4115_packet_mode_supported = 0;
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (profile->packet_mode != 0) {
+ return -rte_mtr_error_set(error, EINVAL,
+ RTE_MTR_ERROR_TYPE_METER_PROFILE_PACKET_MODE, NULL,
+ "Profile packet mode not supported");
+ }
+
+ if (profile->alg == RTE_MTR_SRTCM_RFC2697) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 2697 not supported");
+ }
+
+ if (profile->alg == RTE_MTR_TRTCM_RFC4115) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 4115 not supported");
+ }
+
+ if (profile->trtcm_rfc2698.cir != profile->trtcm_rfc2698.pir ||
+ profile->trtcm_rfc2698.cbs != profile->trtcm_rfc2698.pbs) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile committed and peak rates must be equal");
+ }
+
+ int res = profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id,
+ profile->trtcm_rfc2698.cir,
+ profile->trtcm_rfc2698.cbs, 0, 0);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile could not be added.");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_delete_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id, 0, 0, 0, 0);
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t policy_id,
+ struct rte_mtr_meter_policy_params *policy,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ const struct rte_flow_action *actions = policy->actions[RTE_COLOR_GREEN];
+ int green_action_supported = (actions[0].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_VOID &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_PASSTHRU &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END);
+
+ actions = policy->actions[RTE_COLOR_YELLOW];
+ int yellow_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ actions = policy->actions[RTE_COLOR_RED];
+ int red_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ if (green_action_supported == 0 || yellow_action_supported == 0 ||
+ red_action_supported == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Unsupported meter policy actions");
+ }
+
+ if (profile_inline_ops->flow_mtr_set_policy(internals->flw_dev, policy_id, 1)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Policy could not be added");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_delete_inline(struct rte_eth_dev *eth_dev __rte_unused,
+ uint32_t policy_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ return 0;
+}
+
+static int eth_mtr_create_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (params->use_prev_mtr_color != 0 || params->dscp_table != NULL) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only color blind mode is supported");
+ }
+
+ uint64_t allowed_stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ if ((params->stats_mask & ~allowed_stats_mask) != 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Requested color stats not supported");
+ }
+
+ if (params->meter_enable == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Disabled meters not supported");
+ }
+
+ if (shared == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only shared mtrs are supported");
+ }
+
+ if (params->meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (params->meter_policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ int res = profile_inline_ops->flow_mtr_create_meter(internals->flw_dev,
+ caller_id,
+ mtr_id,
+ params->meter_profile_id,
+ params->meter_policy_id,
+ params->stats_mask);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_destroy_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_destroy_meter(internals->flw_dev, caller_id, mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_stats_adjust_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ uint64_t adjust_value,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ const uint64_t adjust_bit = 1ULL << 63;
+ const uint64_t probe_bit = 1ULL << 62;
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (adjust_value & adjust_bit) {
+ adjust_value &= adjust_bit - 1;
+
+ if (adjust_value > (uint64_t)UINT32_MAX) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "Adjust value is out of range");
+ }
+
+ if (profile_inline_ops->flm_mtr_adjust_stats(internals->flw_dev, caller_id, mtr_id,
+ (uint32_t)adjust_value)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to adjust offloaded MTR");
+ }
+
+ return 0;
+ }
+
+ if (adjust_value & probe_bit) {
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_probe_meter(internals->flw_dev, caller_id,
+ mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to offload to hardware");
+ }
+
+ return 0;
+ }
+
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Meter stats update requires that bit 63 or bit 62 of \"stats_mask\" be set.");
+}
+
+static int eth_mtr_stats_read_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ memset(stats, 0x0, sizeof(struct rte_mtr_stats));
+ profile_inline_ops->flm_mtr_read_stats(internals->flw_dev, caller_id, mtr_id, stats_mask,
+ &stats->n_pkts[RTE_COLOR_GREEN],
+ &stats->n_bytes[RTE_COLOR_GREEN], clear);
+
+ return 0;
+}
+
+/*
+ * Ops setup
+ */
+
+static const struct rte_mtr_ops mtr_ops_inline = {
+ .capabilities_get = eth_mtr_capabilities_get_inline,
+ .meter_profile_add = eth_mtr_meter_profile_add_inline,
+ .meter_profile_delete = eth_mtr_meter_profile_delete_inline,
+ .create = eth_mtr_create_inline,
+ .destroy = eth_mtr_destroy_inline,
+ .meter_policy_add = eth_mtr_meter_policy_add_inline,
+ .meter_policy_delete = eth_mtr_meter_policy_delete_inline,
+ .stats_update = eth_mtr_stats_adjust_inline,
+ .stats_read = eth_mtr_stats_read_inline,
+};
+
+static int eth_mtr_ops_get(struct rte_eth_dev *eth_dev, void *ops)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ ntdrv_4ga_t *p_nt_drv = &internals->p_drv->ntdrv;
+ enum fpga_info_profile profile = p_nt_drv->adapter_info.fpga_info.profile;
+
+ switch (profile) {
+ case FPGA_INFO_PROFILE_INLINE:
+ *(const struct rte_mtr_ops **)ops = &mtr_ops_inline;
+ break;
+
+ case FPGA_INFO_PROFILE_UNKNOWN:
+
+ /* fallthrough */
+ case FPGA_INFO_PROFILE_CAPTURE:
+
+ /* fallthrough */
+ default:
+ NT_LOG(ERR, NTHW, "" PCIIDENT_PRINT_STR ": fpga profile not supported",
+ PCIIDENT_TO_DOMAIN(p_nt_drv->pciident),
+ PCIIDENT_TO_BUSNR(p_nt_drv->pciident),
+ PCIIDENT_TO_DEVNR(p_nt_drv->pciident),
+ PCIIDENT_TO_FUNCNR(p_nt_drv->pciident));
+ return -1;
+ }
+
+ return 0;
+}
+
+static struct meter_ops_s meter_ops = {
+ .eth_mtr_ops_get = eth_mtr_ops_get,
+};
+
+void meter_init(void)
+{
+ NT_LOG(DBG, NTNIC, "Meter ops initialized");
+ register_meter_ops(&meter_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 15374d3045..f7503b62ab 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1690,7 +1690,7 @@ static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_con
return 0;
}
-static const struct eth_dev_ops nthw_eth_dev_ops = {
+struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
.dev_stop = eth_dev_stop,
@@ -1713,6 +1713,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .mtr_ops_get = NULL,
.flow_ops_get = dev_flow_ops_get,
.xstats_get = eth_xstats_get,
.xstats_get_names = eth_xstats_get_names,
@@ -2176,6 +2177,14 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ const struct meter_ops_s *meter_ops = get_meter_ops();
+
+ if (meter_ops != NULL)
+ nthw_eth_dev_ops.mtr_ops_get = meter_ops->eth_mtr_ops_get;
+
+ else
+ NT_LOG(DBG, NTNIC, "Meter module is not initialized");
+
/* Initialize the queue system */
if (err == 0) {
sg_ops = get_sg_ops();
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 6737d18a6f..8d4a11feba 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,24 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+/*
+ * Meter ops
+ */
+static struct meter_ops_s *meter_ops;
+
+void register_meter_ops(struct meter_ops_s *ops)
+{
+ meter_ops = ops;
+}
+
+const struct meter_ops_s *get_meter_ops(void)
+{
+ if (meter_ops == NULL)
+ meter_init();
+
+ return meter_ops;
+}
+
static const struct ntnic_filter_ops *ntnic_filter_ops;
void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 503674f4a4..147d8b2acb 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -9,6 +9,8 @@
#include <stdint.h>
#include "rte_ethdev.h"
+#include "rte_mtr_driver.h"
+
#include "rte_flow_driver.h"
#include "flow_api.h"
@@ -115,6 +117,15 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+/* Meter ops section */
+struct meter_ops_s {
+ int (*eth_mtr_ops_get)(struct rte_eth_dev *eth_dev, void *ops);
+};
+
+void register_meter_ops(struct meter_ops_s *ops);
+const struct meter_ops_s *get_meter_ops(void);
+void meter_init(void);
+
struct ntnic_filter_ops {
int (*poll_statistics)(struct pmd_internals *internals);
};
--
2.45.0
* [PATCH v1 73/73] net/ntnic: add meter documentation
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (71 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 72/73] net/ntnic: add meter module Serhii Iliushyk
@ 2024-10-21 21:05 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (3 subsequent siblings)
76 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-21 21:05 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
ntnic.ini was extended with rte_flow action meter support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
3 files changed, 3 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index af2981ccf6..ecb0605de6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -43,3 +43,4 @@ queue = Y
raw_decap = Y
raw_encap = Y
rss = Y
+meter = Y
\ No newline at end of file
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e5a8d71892..4ae94b161c 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -70,6 +70,7 @@ Features
- Exact match of 140 million flows and policies.
- Basic stats
- Extended stats
+- Flow metering, including meter policy API.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 5be9660287..b4a0bdf245 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -163,6 +163,7 @@ New Features
* Added initialization of FPGA modules related to flow HW offload
* Added basic handling of the virtual queues
* Added age rte flow action support
+ * Added meter flow metering and flow policy support
* **Added cryptodev queue pair reset support.**
--
2.45.0
* Re: [PATCH v1 37/73] net/ntnic: add flow dump feature
2024-10-21 21:04 ` [PATCH v1 37/73] net/ntnic: add flow dump feature Serhii Iliushyk
@ 2024-10-21 23:10 ` Stephen Hemminger
0 siblings, 0 replies; 405+ messages in thread
From: Stephen Hemminger @ 2024-10-21 23:10 UTC (permalink / raw)
To: Serhii Iliushyk
Cc: dev, mko-plv, ckm, andrew.rybchenko, ferruh.yigit, Oleksandr Kolomeiets
On Mon, 21 Oct 2024 23:04:39 +0200
Serhii Iliushyk <sil-plv@napatech.com> wrote:
> +void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
> + uint32_t size, FILE *file)
> +{
> + (void)ndev;
Use __rte_unused; it is cleaner to read.
* Re: [PATCH v1 52/73] net/ntnic: update alignment for virt queue structs
2024-10-21 21:04 ` [PATCH v1 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
@ 2024-10-21 23:12 ` Stephen Hemminger
0 siblings, 0 replies; 405+ messages in thread
From: Stephen Hemminger @ 2024-10-21 23:12 UTC (permalink / raw)
To: Serhii Iliushyk
Cc: dev, mko-plv, ckm, andrew.rybchenko, ferruh.yigit, dvo-plv
On Mon, 21 Oct 2024 23:04:54 +0200
Serhii Iliushyk <sil-plv@napatech.com> wrote:
> -struct __rte_aligned(8) virtq_avail {
> +struct __rte_packed __rte_aligned(1) virtq_avail {
> uint16_t flags;
> uint16_t idx;
> uint16_t ring[]; /* Queue Size */
> };
>
> -struct __rte_aligned(8) virtq_used_elem {
> +struct __rte_packed __rte_aligned(1) virtq_used_elem {
> /* Index of start of used descriptor chain. */
> uint32_t id;
> /* Total length of the descriptor chain which was used (written to) */
> uint32_t len;
> };
>
> -struct __rte_aligned(8) virtq_used {
> +struct __rte_packed __rte_aligned(1) virtq_used {
> uint16_t flags;
> uint16_t idx;
> struct virtq_used_elem ring[]; /* Queue Size */
If you use __rte_packed, doesn't it already assume no alignment?
* Re: [PATCH v1 68/73] net/ntnic: add aged flow event
2024-10-21 21:05 ` [PATCH v1 68/73] net/ntnic: add aged flow event Serhii Iliushyk
@ 2024-10-21 23:22 ` Stephen Hemminger
0 siblings, 0 replies; 405+ messages in thread
From: Stephen Hemminger @ 2024-10-21 23:22 UTC (permalink / raw)
To: Serhii Iliushyk
Cc: dev, mko-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
On Mon, 21 Oct 2024 23:05:10 +0200
Serhii Iliushyk <sil-plv@napatech.com> wrote:
> From: Danylo Vodopianov <dvo-plv@napatech.com>
>
> Port thread was extended with new age event callback handler.
> LRN, INF, STA registers getter setter was added.
>
> Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
> ---
This patch and other parts of the flow API have problems if built
with stdatomic and Clang. It is missing use of RTE_ATOMIC()
../drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c:20:9: error: address argument to atomic operation must be a pointer to _Atomic type ('uint16_t *' (aka 'unsigned short *') invalid)
return rte_atomic_load_explicit(&age_event[port], rte_memory_order_seq_cst);
^ ~~~~~~~~~~~~~~~~
../lib/eal/include/rte_stdatomic.h:69:2: note: expanded from macro 'rte_atomic_load_explicit'
atomic_load_explicit(ptr, memorder)
^ ~~~
/usr/lib/llvm-16/lib/clang/16/include/stdatomic.h:134:30: note: expanded from macro 'atomic_load_explicit'
#define atomic_load_explicit __c11_atomic_load
^
../drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c:25:2: error: address argument to atomic operation must be a pointer to _Atomic type ('uint16_t *' (aka 'unsigned short *') invalid)
rte_atomic_store_explicit(&age_event[port], 1, rte_memory_order_seq_cst);
^ ~~~~~~~~~~~~~~~~
../lib/eal/include/rte_stdatomic.h:72:2: note: expanded from macro 'rte_atomic_store_explicit'
atomic_store_explicit(ptr, val, memorder)
^ ~~~
/usr/lib/llvm-16/lib/clang/16/include/stdatomic.h:131:31: note: expanded from macro 'atomic_store_explicit'
#define atomic_store_explicit __c11_atomic_store
^
../drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c:30:2: error: member reference base type 'uint16_t' (aka 'unsigned short') is not a structure or union
rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../lib/eal/include/rte_stdatomic.h:109:2: note: expanded from macro 'rte_atomic_flag_clear_explicit'
atomic_flag_clear_explicit(ptr, memorder)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/llvm-16/lib/clang/16/include/stdatomic.h:181:79: note: expanded from macro 'atomic_flag_clear_explicit'
#define atomic_flag_clear_explicit(object, order) __c11_atomic_store(&(object)->_Value, 0, order)
~~~~~~~~^ ~~~~~~
../drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c:38:3: error: member reference base type 'uint16_t' (aka 'unsigned short') is not a structure or union
rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../lib/eal/include/rte_stdatomic.h:109:2: note: expanded from macro 'rte_atomic_flag_clear_explicit'
atomic_flag_clear_explicit(ptr, memorder)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/llvm-16/lib/clang/16/include/stdatomic.h:181:79: note: expanded from macro 'atomic_flag_clear_explicit'
#define atomic_flag_clear_explicit(object, order) __c11_atomic_store(&(object)->_Value, 0, order)
~~~~~~~~^ ~~~~~~
../drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c:82:2: error: member reference base type 'uint16_t' (aka 'unsigned short') is not a structure or union
rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../lib/eal/include/rte_stdatomic.h:109:2: note: expanded from macro 'rte_atomic_flag_clear_explicit'
atomic_flag_clear_explicit(ptr, memorder)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/llvm-16/lib/clang/16/include/stdatomic.h:181:79: note: expanded from macro 'atomic_flag_clear_explicit'
#define atomic_flag_clear_explicit(object, order) __c11_atomic_store(&(object)->_Value, 0, order)
* [PATCH v2 00/73] Provide flow filter API and statistics
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (72 preceding siblings ...)
2024-10-21 21:05 ` [PATCH v1 73/73] net/ntnic: add meter documentation Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
` (73 more replies)
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (2 subsequent siblings)
76 siblings, 74 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The list of updates provided by the patchset:
* Multiple TX and RX queues.
* Scatter and gather for TX and RX.
* RSS hash
* RSS key update
* RSS based on VLAN or 5-tuple.
* RSS using different combinations of fields: L3 only,
L4 only or both, and source only, destination only or both.
* Several RSS hash keys, one for each flow type.
* Default RSS operation with no hash key specification.
* VLAN filtering.
* RX VLAN stripping via raw decap.
* TX VLAN insertion via raw encap.
* Flow API.
* Multiple processes.
* Tunnel types: GTP.
* Tunnel HW offload: Packet type, inner/outer RSS,
IP and UDP checksum verification.
* Support for multiple rte_flow groups.
* Encapsulation and decapsulation of GTP data.
* Packet modification: NAT, TTL decrement, DSCP tagging
* Traffic mirroring.
* Jumbo frame support.
* Port and queue statistics.
* RMON statistics in extended stats.
* Flow metering, including meter policy API.
* Link state information.
* CAM and TCAM based matching.
* Exact match of 140 million flows and policies.
* Basic stats
* Extended stats
Danylo Vodopianov (34):
net/ntnic: add API for configuration NT flow dev
net/ntnic: add item UDP
net/ntnic: add action TCP
net/ntnic: add action VLAN
net/ntnic: add item SCTP
net/ntnic: add items IPv6 and ICMPv6
net/ntnic: add action modify field
net/ntnic: add items gtp and actions raw encap/decap
net/ntnic: add cat module
net/ntnic: add SLC LR module
net/ntnic: add PDB module
net/ntnic: add QSL module
net/ntnic: add KM module
net/ntnic: add hash API
net/ntnic: add TPE module
net/ntnic: add FLM module
net/ntnic: add flm rcp module
net/ntnic: add learn flow queue handling
net/ntnic: match and action db attributes were added
net/ntnic: add statistics API
net/ntnic: add rpf module
net/ntnic: add statistics poll
net/ntnic: added flm stat interface
net/ntnic: add tsm module
net/ntnic: add xstats
net/ntnic: added flow statistics
net/ntnic: add scrub registers
net/ntnic: added flow aged APIs
net/ntnic: add aged API to the inline profile
net/ntnic: add info and configure flow API
net/ntnic: add aged flow event
net/ntnic: add thread termination
net/ntnic: add meter module
net/ntnic: add meter documentation
Oleksandr Kolomeiets (17):
net/ntnic: add flow dump feature
net/ntnic: add flow flush
net/ntnic: sort FPGA registers alphanumerically
net/ntnic: add MOD CSU
net/ntnic: add MOD FLM
net/ntnic: add HFU module
net/ntnic: add IFR module
net/ntnic: add MAC Rx module
net/ntnic: add MAC Tx module
net/ntnic: add RPP LR module
net/ntnic: add MOD SLC LR
net/ntnic: add Tx CPY module
net/ntnic: add Tx INS module
net/ntnic: add Tx RPL module
net/ntnic: add STA module
net/ntnic: add TSM module
net/ntnic: update documentation
Serhii Iliushyk (22):
net/ntnic: add flow filter API
net/ntnic: add minimal create/destroy flow operations
net/ntnic: add internal flow create/destroy API
net/ntnic: add minimal NT flow inline profile
net/ntnic: add management API for NT flow profile
net/ntnic: add NT flow profile management implementation
net/ntnic: add create/destroy implementation for NT flows
net/ntnic: add infrastructure for flow actions and items
net/ntnic: add action queue
net/ntnic: add action mark
net/ntnic: add action jump
net/ntnic: add action drop
net/ntnic: add item eth
net/ntnic: add item IPv4
net/ntnic: add item ICMP
net/ntnic: add item port ID
net/ntnic: add item void
net/ntnic: add GMF (Generic MAC Feeder) module
net/ntnic: update alignment for virt queue structs
net/ntnic: enable RSS feature
net/ntnic: add age documentation
net/ntnic: add meter API
doc/guides/nics/features/ntnic.ini | 32 +
doc/guides/nics/ntnic.rst | 49 +
doc/guides/rel_notes/release_24_11.rst | 16 +-
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 598 ++
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 +-
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 73 +
drivers/net/ntnic/include/flow_api.h | 138 +
drivers/net/ntnic/include/flow_api_engine.h | 314 +
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 248 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 4 +
drivers/net/ntnic/include/ntnic_stat.h | 265 +
drivers/net/ntnic/include/ntos_drv.h | 24 +
.../ntnic/include/stream_binary_flow_api.h | 67 +
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 +
drivers/net/ntnic/meson.build | 20 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 6 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 +
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 30 +
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 +
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 759 +++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 99 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
.../net/ntnic/nthw/flow_api/flow_id_table.c | 147 +
.../net/ntnic/nthw/flow_api/flow_id_table.h | 26 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1171 ++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 457 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 640 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 +++
.../flow_api/profile_inline/flm_age_queue.c | 166 +
.../flow_api/profile_inline/flm_age_queue.h | 42 +
.../flow_api/profile_inline/flm_evt_queue.c | 293 +
.../flow_api/profile_inline/flm_evt_queue.h | 55 +
.../flow_api/profile_inline/flm_lrn_queue.c | 70 +
.../flow_api/profile_inline/flm_lrn_queue.h | 25 +
.../profile_inline/flow_api_hw_db_inline.c | 2851 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 374 ++
.../profile_inline/flow_api_profile_inline.c | 5272 +++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 76 +
.../flow_api_profile_inline_config.h | 129 +
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 +
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 498 ++
.../supported/nthw_fpga_9563_055_049_0000.c | 3317 +++++++----
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 11 +-
.../nthw/supported/nthw_fpga_mod_str_map.c | 2 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 5 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 48 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 205 +
drivers/net/ntnic/ntnic_ethdev.c | 750 ++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 940 +++
drivers/net/ntnic/ntnic_mod_reg.c | 93 +
drivers/net/ntnic/ntnic_mod_reg.h | 233 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 +++
drivers/net/ntnic/ntutil/nt_util.h | 12 +
75 files changed, 23690 insertions(+), 1049 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 01/73] net/ntnic: add API for configuration NT flow dev
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 02/73] net/ntnic: add flow filter API Serhii Iliushyk
` (72 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
This API allows enabling the flow profile for NT SmartNICs.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 30 +++
drivers/net/ntnic/include/flow_api_engine.h | 5 +
drivers/net/ntnic/include/ntos_drv.h | 1 +
.../ntnic/include/stream_binary_flow_api.h | 9 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 221 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 22 ++
drivers/net/ntnic/ntnic_mod_reg.c | 5 +
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++
8 files changed, 307 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 984450afdc..c80906ec50 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -34,6 +34,8 @@ struct flow_eth_dev {
struct flow_nic_dev *ndev;
/* NIC port id */
uint8_t port;
+ /* App assigned port_id - may be DPDK port_id */
+ uint32_t port_id;
/* 0th for exception */
struct flow_queue_id_s rx_queue[FLOW_MAX_QUEUES + 1];
@@ -41,6 +43,9 @@ struct flow_eth_dev {
/* VSWITCH has exceptions sent on queue 0 per design */
int num_queues;
+ /* QSL_HSH index if RSS needed QSL v6+ */
+ int rss_target_id;
+
struct flow_eth_dev *next;
};
@@ -48,6 +53,8 @@ struct flow_eth_dev {
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
uint16_t ports; /* number of in-ports addressable on this NIC */
+ /* flow profile this NIC is initially prepared for */
+ enum flow_eth_dev_profile flow_profile;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
@@ -73,6 +80,14 @@ struct flow_nic_dev {
extern const char *dbg_res_descr[];
+#define flow_nic_set_bit(arr, x) \
+ do { \
+ uint8_t *_temp_arr = (arr); \
+ size_t _temp_x = (x); \
+ _temp_arr[_temp_x / 8] = \
+ (uint8_t)(_temp_arr[_temp_x / 8] | (uint8_t)(1 << (_temp_x % 8))); \
+ } while (0)
+
#define flow_nic_unset_bit(arr, x) \
do { \
size_t _temp_x = (x); \
@@ -85,6 +100,18 @@ extern const char *dbg_res_descr[];
(arr[_temp_x / 8] & (uint8_t)(1 << (_temp_x % 8))); \
})
+#define flow_nic_mark_resource_used(_ndev, res_type, index) \
+ do { \
+ struct flow_nic_dev *_temp_ndev = (_ndev); \
+ typeof(res_type) _temp_res_type = (res_type); \
+ size_t _temp_index = (index); \
+ NT_LOG(DBG, FILTER, "mark resource used: %s idx %zu", \
+ dbg_res_descr[_temp_res_type], _temp_index); \
+ assert(flow_nic_is_bit_set(_temp_ndev->res[_temp_res_type].alloc_bm, \
+ _temp_index) == 0); \
+ flow_nic_set_bit(_temp_ndev->res[_temp_res_type].alloc_bm, _temp_index); \
+ } while (0)
+
#define flow_nic_mark_resource_unused(_ndev, res_type, index) \
do { \
typeof(res_type) _temp_res_type = (res_type); \
@@ -97,6 +124,9 @@ extern const char *dbg_res_descr[];
#define flow_nic_is_resource_used(_ndev, res_type, index) \
(!!flow_nic_is_bit_set((_ndev)->res[res_type].alloc_bm, index))
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment);
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index db5e6fe09d..d025677e25 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -41,6 +41,11 @@ enum res_type_e {
RES_INVALID
};
+/*
+ * Flow NIC offload management
+ */
+#define MAX_OUTPUT_DEST (128)
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index d51d1e3677..8fd577dfe3 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -86,6 +86,7 @@ struct __rte_cache_aligned ntnic_tx_queue {
struct pmd_internals {
const struct rte_pci_device *pci_dev;
+ struct flow_eth_dev *flw_dev;
char name[20];
int n_intf_no;
int lpbk_mode;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 10529b8843..47e5353344 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,11 +12,20 @@
#define FLOW_MAX_QUEUES 128
+/*
+ * Flow eth dev profile determines how the FPGA module resources are
+ * managed and what features are available
+ */
+enum flow_eth_dev_profile {
+ FLOW_ETH_DEV_PROFILE_INLINE = 0,
+};
+
struct flow_queue_id_s {
int id;
int hw_id;
};
struct flow_eth_dev; /* port device */
+struct flow_handle;
#endif /* _STREAM_BINARY_FLOW_API_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34e84559eb..f49aca79c1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_nic_setup.h"
#include "ntnic_mod_reg.h"
+#include "flow_api.h"
#include "flow_filter.h"
const char *dbg_res_descr[] = {
@@ -35,6 +36,24 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Resources
+ */
+
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment)
+{
+ for (unsigned int i = 0; i < ndev->res[res_type].resource_count; i += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, i)) {
+ flow_nic_mark_resource_used(ndev, res_type, i);
+ ndev->res[res_type].ref[i] = 1;
+ return i;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
@@ -55,10 +74,60 @@ int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return !!ndev->res[res_type].ref[index];/* if 0 resource has been freed */
}
+/*
+ * Nic port/adapter lookup
+ */
+
+static struct flow_eth_dev *nic_and_port_to_eth_dev(uint8_t adapter_no, uint8_t port)
+{
+ struct flow_nic_dev *nic_dev = dev_base;
+
+ while (nic_dev) {
+ if (nic_dev->adapter_no == adapter_no)
+ break;
+
+ nic_dev = nic_dev->next;
+ }
+
+ if (!nic_dev)
+ return NULL;
+
+ struct flow_eth_dev *dev = nic_dev->eth_base;
+
+ while (dev) {
+ if (port == dev->port)
+ return dev;
+
+ dev = dev->next;
+ }
+
+ return NULL;
+}
+
+static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
+{
+ struct flow_nic_dev *ndev = dev_base;
+
+ while (ndev) {
+ if (adapter_no == ndev->adapter_no)
+ break;
+
+ ndev = ndev->next;
+ }
+
+ return ndev;
+}
+
/*
* Device Management API
*/
+static void nic_insert_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *dev)
+{
+ dev->next = ndev->eth_base;
+ ndev->eth_base = dev;
+}
+
static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *eth_dev)
{
struct flow_eth_dev *dev = ndev->eth_base, *prev = NULL;
@@ -242,6 +311,154 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
return -1;
}
+/*
+ * adapter_no physical adapter no
+ * port_no local port no
+ * alloc_rx_queues number of rx-queues to allocate for this eth_dev
+ */
+static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no, uint32_t port_id,
+ int alloc_rx_queues, struct flow_queue_id_s queue_ids[],
+ int *rss_target_id, enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+
+ int i;
+ struct flow_eth_dev *eth_dev = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "Get eth-port adapter %i, port %i, port_id %u, rx queues %i, profile %i",
+ adapter_no, port_no, port_id, alloc_rx_queues, flow_profile);
+
+ if (MAX_OUTPUT_DEST < FLOW_MAX_QUEUES) {
+ assert(0);
+ NT_LOG(ERR, FILTER,
+ "ERROR: Internal array for multiple queues too small for API");
+ }
+
+ pthread_mutex_lock(&base_mtx);
+ struct flow_nic_dev *ndev = get_nic_dev_from_adapter_no(adapter_no);
+
+ if (!ndev) {
+ /* Error - no flow api found on specified adapter */
+ NT_LOG(ERR, FILTER, "ERROR: no flow interface registered for adapter %d",
+ adapter_no);
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if (ndev->ports < ((uint16_t)port_no + 1)) {
+ NT_LOG(ERR, FILTER, "ERROR: port exceeds supported port range for adapter");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if ((alloc_rx_queues - 1) > FLOW_MAX_QUEUES) { /* 0th is exception so +1 */
+ NT_LOG(ERR, FILTER,
+ "ERROR: Exceeds supported number of rx queues per eth device");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ /* don't accept multiple eth_dev's on same NIC and same port */
+ eth_dev = nic_and_port_to_eth_dev(adapter_no, port_no);
+
+ if (eth_dev) {
+ NT_LOG(DBG, FILTER, "Re-opening existing NIC port device: NIC DEV: %i Port %i",
+ adapter_no, port_no);
+ pthread_mutex_unlock(&base_mtx);
+ flow_delete_eth_dev(eth_dev);
+ eth_dev = NULL;
+ }
+
+ eth_dev = calloc(1, sizeof(struct flow_eth_dev));
+
+ if (!eth_dev) {
+ NT_LOG(ERR, FILTER, "ERROR: calloc failed");
+ goto err_exit1;
+ }
+
+ pthread_mutex_lock(&ndev->mtx);
+
+ eth_dev->ndev = ndev;
+ eth_dev->port = port_no;
+ eth_dev->port_id = port_id;
+
+ /* Allocate the requested queues in HW for this dev */
+
+ for (i = 0; i < alloc_rx_queues; i++) {
+#ifdef SCATTER_GATHER
+ eth_dev->rx_queue[i] = queue_ids[i];
+#else
+ int queue_id = flow_nic_alloc_resource(ndev, RES_QUEUE, 1);
+
+ if (queue_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: no more free queue IDs in NIC");
+ goto err_exit0;
+ }
+
+ eth_dev->rx_queue[eth_dev->num_queues].id = (uint8_t)queue_id;
+ eth_dev->rx_queue[eth_dev->num_queues].hw_id =
+ ndev->be.iface->alloc_rx_queue(ndev->be.be_dev,
+ eth_dev->rx_queue[eth_dev->num_queues].id);
+
+ if (eth_dev->rx_queue[eth_dev->num_queues].hw_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: could not allocate a new queue");
+ goto err_exit0;
+ }
+
+ if (queue_ids)
+ queue_ids[eth_dev->num_queues] = eth_dev->rx_queue[eth_dev->num_queues];
+#endif
+
+ if (i == 0 && (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE && exception_path)) {
+ /*
+ * Init QSL UNM - unmatched - redirects otherwise discarded
+ * packets in QSL
+ */
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_DEST_QUEUE, eth_dev->port,
+ eth_dev->rx_queue[0].hw_id) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1) < 0)
+ goto err_exit0;
+ }
+
+ eth_dev->num_queues++;
+ }
+
+ eth_dev->rss_target_id = -1;
+
+ *rss_target_id = eth_dev->rss_target_id;
+
+ nic_insert_eth_port_dev(ndev, eth_dev);
+
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+ return eth_dev;
+
+err_exit0:
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+
+err_exit1:
+ if (eth_dev)
+ free(eth_dev);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ NT_LOG(DBG, FILTER, "ERR in %s", __func__);
+ return NULL; /* Error exit */
+}
+
struct flow_nic_dev *flow_api_create(uint8_t adapter_no, const struct flow_api_backend_ops *be_if,
void *be_dev)
{
@@ -383,6 +600,10 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
+ /*
+ * Device Management API
+ */
+ .flow_get_eth_dev = flow_get_eth_dev,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bff893ec7a..510c0e5d23 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1355,6 +1355,13 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1378,10 +1385,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
uint32_t n_port_mask = -1; /* All ports enabled by default */
uint32_t nb_rx_queues = 1;
uint32_t nb_tx_queues = 1;
+ uint32_t exception_path = 0;
struct flow_queue_id_s queue_ids[MAX_QUEUES];
int n_phy_ports;
struct port_link_speed pls_mbps[NUM_ADAPTER_PORTS_MAX] = { 0 };
int num_port_speeds = 0;
+ enum flow_eth_dev_profile profile = FLOW_ETH_DEV_PROFILE_INLINE;
+
NT_LOG_DBGX(DBG, NTNIC, "Dev %s PF #%i Init : %02x:%02x:%i", pci_dev->name,
pci_dev->addr.function, pci_dev->addr.bus, pci_dev->addr.devid,
pci_dev->addr.function);
@@ -1681,6 +1691,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (flow_filter_ops != NULL) {
+ internals->flw_dev = flow_filter_ops->flow_get_eth_dev(0, n_intf_no,
+ eth_dev->data->port_id, nb_rx_queues, queue_ids,
+ &internals->txq_scg[0].rss_target_id, profile, exception_path);
+
+ if (!internals->flw_dev) {
+ NT_LOG(ERR, NTNIC,
+ "Error creating port. Resource exhaustion in HW");
+ return -1;
+ }
+ }
+
/* connect structs */
internals->p_drv = p_drv;
eth_dev->data->dev_private = internals;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index a03c97801b..ac8afdef6a 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,6 +118,11 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+const struct profile_inline_ops *get_profile_inline_ops(void)
+{
+ return NULL;
+}
+
static const struct flow_filter_ops *flow_filter_ops;
void register_flow_filter_ops(const struct flow_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 5b97b3d8ac..017d15d7bc 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include "flow_api.h"
+#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
#include "nthw_platform_drv.h"
#include "nthw_drv.h"
@@ -223,10 +224,23 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+const struct profile_inline_ops *get_profile_inline_ops(void);
+
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
int adapter_no);
int (*flow_filter_done)(struct flow_nic_dev *dev);
+ /*
+ * Device Management API
+ */
+ struct flow_eth_dev *(*flow_get_eth_dev)(uint8_t adapter_no,
+ uint8_t hw_port_no,
+ uint32_t port_id,
+ int alloc_rx_queues,
+ struct flow_queue_id_s queue_ids[],
+ int *rss_target_id,
+ enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path);
};
void register_flow_filter_ops(const struct flow_filter_ops *ops);
--
2.45.0
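The adapter/port lookup added in this patch is a pair of singly linked list walks: first find the `flow_nic_dev` by adapter number, then find the `flow_eth_dev` by port, with new ports inserted at the head of the adapter's list. A minimal standalone sketch of that scheme (the types and names here are illustrative stand-ins, not the driver's identifiers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical minimal mirrors of the driver's adapter/port lists. */
struct eth_dev {
	uint8_t port;
	struct eth_dev *next;
};

struct nic_dev {
	uint8_t adapter_no;
	struct eth_dev *eth_base;
	struct nic_dev *next;
};

static struct nic_dev *dev_base; /* head of the global adapter list */

/* Walk the adapter list, then that adapter's port list, in the same
 * way nic_and_port_to_eth_dev() does. */
static struct eth_dev *lookup(uint8_t adapter_no, uint8_t port)
{
	struct nic_dev *nic = dev_base;

	while (nic && nic->adapter_no != adapter_no)
		nic = nic->next;

	if (!nic)
		return NULL;

	for (struct eth_dev *dev = nic->eth_base; dev; dev = dev->next)
		if (dev->port == port)
			return dev;

	return NULL;
}

/* Head insertion, as nic_insert_eth_port_dev() does. */
static void insert(struct nic_dev *nic, struct eth_dev *dev)
{
	dev->next = nic->eth_base;
	nic->eth_base = dev;
}
```

Because insertion is at the head, the most recently opened port is found first; in the driver the walks are additionally serialized by `base_mtx`.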
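The resource handling this patch introduces (`flow_nic_set_bit`, `flow_nic_alloc_resource`) is a first-fit bitmap allocator: each resource type has an allocation bitmap and a reference-count array, and allocation scans the bitmap in steps of the requested alignment. A minimal standalone sketch, with illustrative names (`res_pool`, `pool_alloc`) rather than the driver's macros:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RES_COUNT 8 /* hypothetical pool size for this sketch */

/* Hypothetical mirror of the driver's per-resource-type state:
 * an allocation bitmap plus a reference-count array. */
struct res_pool {
	uint8_t alloc_bm[(RES_COUNT + 7) / 8];
	uint32_t ref[RES_COUNT];
	unsigned int resource_count;
};

static int pool_bit_set(const struct res_pool *p, size_t x)
{
	return p->alloc_bm[x / 8] & (uint8_t)(1 << (x % 8));
}

static void pool_set_bit(struct res_pool *p, size_t x)
{
	p->alloc_bm[x / 8] =
		(uint8_t)(p->alloc_bm[x / 8] | (uint8_t)(1 << (x % 8)));
}

/* First-fit scan in steps of 'alignment', as in
 * flow_nic_alloc_resource(): returns the index, or -1 when the
 * pool is exhausted. */
static int pool_alloc(struct res_pool *p, uint32_t alignment)
{
	for (unsigned int i = 0; i < p->resource_count; i += alignment) {
		if (!pool_bit_set(p, i)) {
			pool_set_bit(p, i);
			p->ref[i] = 1;
			return (int)i;
		}
	}

	return -1;
}
```

Freeing is the inverse: clear the bit and zero the refcount, which is what `flow_nic_free_resource()` does via `flow_nic_unset_bit`.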
* [PATCH v2 02/73] net/ntnic: add flow filter API
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 03/73] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
` (71 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Enable flow ops getter
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 13 +++++++
.../ntnic/include/stream_binary_flow_api.h | 2 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 7 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 37 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 15 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 5 +++
7 files changed, 80 insertions(+)
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
new file mode 100644
index 0000000000..802e6dcbe1
--- /dev/null
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -0,0 +1,13 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __CREATE_ELEMENTS_H__
+#define __CREATE_ELEMENTS_H__
+
+
+#include "stream_binary_flow_api.h"
+#include <rte_flow.h>
+
+#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 47e5353344..a6244d4082 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,8 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include "rte_flow.h"
+#include "rte_flow_driver.h"
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 3d9566a52e..d272c73c62 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -79,6 +79,7 @@ sources = files(
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
'ntlog/ntlog.c',
+ 'ntnic_filter/ntnic_filter.c',
'ntutil/nt_util.c',
'ntnic_mod_reg.c',
'ntnic_vfio.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 510c0e5d23..a509a8eb51 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1321,6 +1321,12 @@ eth_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, size_t fw_size
}
}
+static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct rte_flow_ops **ops)
+{
+ *ops = get_dev_flow_ops();
+ return 0;
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1349,6 +1355,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
};
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
new file mode 100644
index 0000000000..445139abc9
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -0,0 +1,37 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_flow_driver.h>
+#include "ntnic_mod_reg.h"
+
+static int
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ int res = 0;
+
+ return res;
+}
+
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ const struct rte_flow_item items[] __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct rte_flow *flow = NULL;
+
+ return flow;
+}
+
+static const struct rte_flow_ops dev_flow_ops = {
+ .create = eth_flow_create,
+ .destroy = eth_flow_destroy,
+};
+
+void dev_flow_init(void)
+{
+ register_dev_flow_ops(&dev_flow_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ac8afdef6a..ad2266116f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -137,3 +137,18 @@ const struct flow_filter_ops *get_flow_filter_ops(void)
return flow_filter_ops;
}
+
+static const struct rte_flow_ops *dev_flow_ops;
+
+void register_dev_flow_ops(const struct rte_flow_ops *ops)
+{
+ dev_flow_ops = ops;
+}
+
+const struct rte_flow_ops *get_dev_flow_ops(void)
+{
+ if (dev_flow_ops == NULL)
+ dev_flow_init();
+
+ return dev_flow_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 017d15d7bc..457dc58794 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -15,6 +15,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nthw_fpga_rst_nt200a0x.h"
#include "ntnic_virt_queue.h"
+#include "create_elements.h"
/* sg ops section */
struct sg_ops_s {
@@ -243,6 +244,10 @@ struct flow_filter_ops {
uint32_t exception_path);
};
+void register_dev_flow_ops(const struct rte_flow_ops *ops);
+const struct rte_flow_ops *get_dev_flow_ops(void);
+void dev_flow_init(void);
+
void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
--
2.45.0
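The register/getter pattern this patch uses (`register_dev_flow_ops()` / `get_dev_flow_ops()` with lazy init) keeps modules decoupled: callers fetch an ops table through a getter, and the getter triggers the owning module's init hook if nothing has been registered yet. A minimal sketch of the pattern, with illustrative names in place of the `rte_flow_ops` types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the driver's rte_flow_ops table. */
struct flow_ops {
	int (*create)(void);
};

static const struct flow_ops *dev_flow_ops;

static int sketch_create(void)
{
	return 42;
}

static const struct flow_ops sketch_ops = {
	.create = sketch_create,
};

static void register_ops(const struct flow_ops *ops)
{
	dev_flow_ops = ops;
}

/* Module init hook, as dev_flow_init() registers dev_flow_ops. */
static void module_init(void)
{
	register_ops(&sketch_ops);
}

/* Lazy-init getter mirroring get_dev_flow_ops(): run the init hook
 * on first use, then return the registered table. */
static const struct flow_ops *get_ops(void)
{
	if (dev_flow_ops == NULL)
		module_init();

	return dev_flow_ops;
}
```

In the driver, `dev_flow_ops_get()` simply forwards the table returned by the getter to the rte_flow layer.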
* [PATCH v2 03/73] net/ntnic: add minimal create/destroy flow operations
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 02/73] net/ntnic: add flow filter API Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 04/73] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
` (70 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add a high-level API that describes the base create/destroy implementation.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 51 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 227 +++++++++++++++++-
drivers/net/ntnic/ntutil/nt_util.h | 3 +
3 files changed, 274 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 802e6dcbe1..179542d2b2 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -6,8 +6,59 @@
#ifndef __CREATE_ELEMENTS_H__
#define __CREATE_ELEMENTS_H__
+#include "stdint.h"
#include "stream_binary_flow_api.h"
#include <rte_flow.h>
+#define MAX_ELEMENTS 64
+#define MAX_ACTIONS 32
+
+struct cnv_match_s {
+ struct rte_flow_item rte_flow_item[MAX_ELEMENTS];
+};
+
+struct cnv_attr_s {
+ struct cnv_match_s match;
+ struct rte_flow_attr attr;
+ uint16_t forced_vlan_vid;
+ uint16_t caller_id;
+};
+
+struct cnv_action_s {
+ struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_queue queue;
+};
+
+/*
+ * Only needed because it eases the use of statistics through NTAPI
+ * for faster integration into NTAPI version of driver
+ * Therefore, this is only a good idea when running on a temporary NTAPI
+ * The query() functionality must go to flow engine, when moved to Open Source driver
+ */
+
+struct rte_flow {
+ void *flw_hdl;
+ int used;
+
+ uint32_t flow_stat_id;
+
+ uint16_t caller_id;
+};
+
+enum nt_rte_flow_item_type {
+ NT_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
+ NT_RTE_FLOW_ITEM_TYPE_TUNNEL,
+};
+
+extern rte_spinlock_t flow_lock;
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem);
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset);
+
#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 445139abc9..74cf360da0 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -4,24 +4,237 @@
*/
#include <rte_flow_driver.h>
+#include "nt_util.h"
+#include "create_elements.h"
#include "ntnic_mod_reg.h"
+#include "ntos_system.h"
+
+#define MAX_RTE_FLOWS 8192
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
+static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
+{
+ if (error) {
+ error->cause = NULL;
+ error->message = rte_flow_error->message;
+
+		if (rte_flow_error->type == RTE_FLOW_ERROR_TYPE_NONE)
+ error->type = RTE_FLOW_ERROR_TYPE_NONE;
+
+ else
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+
+ return 0;
+}
+
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr)
+{
+ memset(&attribute->attr, 0x0, sizeof(struct rte_flow_attr));
+
+ if (attr) {
+ attribute->attr.group = attr->group;
+ attribute->attr.priority = attr->priority;
+ }
+
+ return 0;
+}
+
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem)
+{
+ int eidx = 0;
+ int iter_idx = 0;
+ int type = -1;
+
+ if (!items) {
+ NT_LOG(ERR, FILTER, "ERROR no items to iterate!");
+ return -1;
+ }
+
+ do {
+ type = items[iter_idx].type;
+
+ if (type < 0) {
+ if ((int)items[iter_idx].type == NT_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ type = NT_RTE_FLOW_ITEM_TYPE_TUNNEL;
+
+ } else {
+ NT_LOG(ERR, FILTER, "ERROR unknown item type received!");
+ return -1;
+ }
+ }
+
+ if (type >= 0) {
+ if (items[iter_idx].last) {
+ /* Ranges are not supported yet */
+ NT_LOG(ERR, FILTER, "ERROR ITEM-RANGE SETUP - NOT SUPPORTED!");
+ return -1;
+ }
+
+ if (eidx == max_elem) {
+ NT_LOG(ERR, FILTER, "ERROR TOO MANY ELEMENTS ENCOUNTERED!");
+ return -1;
+ }
+
+ match->rte_flow_item[eidx].type = type;
+ match->rte_flow_item[eidx].spec = items[iter_idx].spec;
+ match->rte_flow_item[eidx].mask = items[iter_idx].mask;
+
+ eidx++;
+ iter_idx++;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
+ return (type >= 0) ? 0 : -1;
+}
+
+int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ int max_elem __rte_unused,
+ uint32_t queue_offset __rte_unused)
+{
+ int type = -1;
+
+ return (type >= 0) ? 0 : -1;
+}
+
+static inline uint16_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + port + 1;
+}
+
+static int convert_flow(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct cnv_attr_s *attribute,
+ struct cnv_match_s *match,
+ struct cnv_action_s *action,
+ struct rte_flow_error *error)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t queue_offset = 0;
+
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!internals) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Missing eth_dev");
+ return -1;
+ }
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0) {
+ /*
+		 * The queues coming from the main PMD will always start from 0.
+		 * When the port is a VF/vDPA port, the queues must be changed
+		 * to match the queues allocated for the VF/vDPA.
+ */
+ queue_offset = internals->vpq[0].id;
+ }
+
+ if (create_attr(attribute, attr) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL, "Error in attr");
+ return -1;
+ }
+
+ if (create_match_elements(match, items, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in items");
+ return -1;
+ }
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ if (create_action_elements_inline(action, actions,
+ MAX_ACTIONS, queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return -1;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return -1;
+ }
+
+ return 0;
+}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
- struct rte_flow_error *error __rte_unused)
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
+ struct rte_flow_error *error)
{
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!flow)
+ return 0;
return res;
}
-static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_attr *attr __rte_unused,
- const struct rte_flow_item items[] __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ struct cnv_attr_s attribute = { 0 };
+ struct cnv_match_s match = { 0 };
+ struct cnv_action_s action = { 0 };
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t flow_stat_id = 0;
+
+ if (convert_flow(eth_dev, attr, items, actions, &attribute, &match, &action, error) < 0)
+ return NULL;
+
+ /* Main application caller_id is port_id shifted above VF ports */
+ attribute.caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ convert_error(error, &flow_error);
+ return (struct rte_flow *)NULL;
+ }
+
struct rte_flow *flow = NULL;
+ rte_spinlock_lock(&flow_lock);
+ int i;
+
+ for (i = 0; i < MAX_RTE_FLOWS; i++) {
+ if (!nt_flows[i].used) {
+ nt_flows[i].flow_stat_id = flow_stat_id;
+
+ if (nt_flows[i].flow_stat_id < NT_MAX_COLOR_FLOW_STATS) {
+ nt_flows[i].used = 1;
+ flow = &nt_flows[i];
+ }
+
+ break;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
return flow;
}
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 64947f5fbf..71ecd6c68c 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -9,6 +9,9 @@
#include <stdint.h>
#include "nt4ga_link.h"
+/* Total max VDPA ports */
+#define MAX_VDPA_PORTS 128UL
+
#ifndef ARRAY_SIZE
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
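The allocation loop in eth_flow_create() above claims the first unused entry in the static nt_flows table while holding flow_lock. A minimal standalone sketch of that first-free-slot pattern (hypothetical names, locking elided for brevity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_RTE_FLOWS 8192

/* Hypothetical stand-in for the driver's struct rte_flow. */
struct flow_slot {
	int used;
	uint32_t flow_stat_id;
};

static struct flow_slot flows[MAX_RTE_FLOWS];

/* Claim the first unused slot, as eth_flow_create() does under flow_lock;
 * returns NULL when the table is exhausted. */
static struct flow_slot *slot_alloc(uint32_t flow_stat_id)
{
	for (int i = 0; i < MAX_RTE_FLOWS; i++) {
		if (!flows[i].used) {
			flows[i].used = 1;
			flows[i].flow_stat_id = flow_stat_id;
			return &flows[i];
		}
	}
	return NULL;
}

/* Freeing is just clearing the used flag; the slot is reused later. */
static void slot_free(struct flow_slot *slot)
{
	if (slot)
		slot->used = 0;
}
```

Because the table is statically allocated, a freed slot is immediately reusable and a handle can later be validated with a simple pointer-range check; in the driver both operations happen with flow_lock held.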
* [PATCH v2 04/73] net/ntnic: add internal flow create/destroy API
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (2 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 03/73] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 05/73] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
` (69 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
NT-specific flow filter API for creating/destroying a flow
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 39 +++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 66 ++++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++++
3 files changed, 116 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index f49aca79c1..d779dc481f 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -117,6 +117,40 @@ static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
return ndev;
}
+/*
+ * Flow API
+ */
+
+static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ uint16_t forced_vlan_vid __rte_unused,
+ uint16_t caller_id __rte_unused,
+ const struct rte_flow_item item[] __rte_unused,
+ const struct rte_flow_action action[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return NULL;
+ }
+
+ return NULL;
+}
+
+static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
+ struct flow_handle *flow __rte_unused, struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return -1;
+}
/*
* Device Management API
@@ -604,6 +638,11 @@ static const struct flow_filter_ops ops = {
* Device Management API
*/
.flow_get_eth_dev = flow_get_eth_dev,
+ /*
+ * NT Flow API
+ */
+ .flow_create = flow_create,
+ .flow_destroy = flow_destroy,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 74cf360da0..b9d723c9dd 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -110,6 +110,13 @@ static inline uint16_t get_caller_id(uint16_t port)
return MAX_VDPA_PORTS + port + 1;
}
+static int is_flow_handle_typecast(struct rte_flow *flow)
+{
+ const void *first_element = &nt_flows[0];
+ const void *last_element = &nt_flows[MAX_RTE_FLOWS - 1];
+ return (void *)flow < first_element || (void *)flow > last_element;
+}
+
static int convert_flow(struct rte_eth_dev *eth_dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
@@ -173,9 +180,17 @@ static int convert_flow(struct rte_eth_dev *eth_dev,
}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
- struct rte_flow_error *error)
+eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
@@ -185,6 +200,20 @@ eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow
if (!flow)
return 0;
+ if (is_flow_handle_typecast(flow)) {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, (void *)flow, &flow_error);
+ convert_error(error, &flow_error);
+
+ } else {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, flow->flw_hdl,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ rte_spinlock_unlock(&flow_lock);
+ }
+
return res;
}
@@ -194,6 +223,13 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
@@ -213,8 +249,12 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
attribute.caller_id = get_caller_id(eth_dev->data->port_id);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ void *flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
convert_error(error, &flow_error);
- return (struct rte_flow *)NULL;
+ return (struct rte_flow *)flw_hdl;
}
struct rte_flow *flow = NULL;
@@ -236,6 +276,26 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
rte_spinlock_unlock(&flow_lock);
+ if (flow) {
+ flow->flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ if (!flow->flw_hdl) {
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ flow = NULL;
+ rte_spinlock_unlock(&flow_lock);
+
+ } else {
+ rte_spinlock_lock(&flow_lock);
+ flow->caller_id = attribute.caller_id;
+ rte_spinlock_unlock(&flow_lock);
+ }
+ }
+
return flow;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 457dc58794..ec8c1612d1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -242,6 +242,20 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ /*
+ * NT Flow API
+ */
+ struct flow_handle *(*flow_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item item[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
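The patch above distinguishes handles living in the driver's static nt_flows table from raw flow_handle pointers returned by the FPGA layer using a pointer-range test (is_flow_handle_typecast()). A small standalone sketch of the same idea, with hypothetical names (the driver compares the pointers directly; uintptr_t is used here to keep the comparison well defined):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define N_SLOTS 16

struct slot { int used; };

static struct slot table[N_SLOTS];

/* True when the handle does NOT point into the static table, i.e. it is
 * a type-cast hardware handle rather than one of our slots -- the same
 * range test performed by is_flow_handle_typecast(). */
static bool is_typecast_handle(const void *h)
{
	uintptr_t p = (uintptr_t)h;
	uintptr_t first = (uintptr_t)&table[0];
	uintptr_t last = (uintptr_t)&table[N_SLOTS - 1];

	return p < first || p > last;
}
```

This lets eth_flow_destroy() pick the right teardown path: a table-resident handle has a flw_hdl to pass down and a used flag to clear, while a type-cast handle is forwarded to flow_destroy() as-is.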
* [PATCH v2 05/73] net/ntnic: add minimal NT flow inline profile
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (3 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 04/73] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 06/73] net/ntnic: add management API for NT flow profile Serhii Iliushyk
` (68 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The flow profile implements all flow-related operations
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 15 +++++
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 28 +++++++-
.../profile_inline/flow_api_profile_inline.c | 65 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 33 ++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 12 +++-
drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++++
7 files changed, 174 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index c80906ec50..3bdfdd4f94 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -74,6 +74,21 @@ struct flow_nic_dev {
struct flow_nic_dev *next;
};
+enum flow_nic_err_msg_e {
+ ERR_SUCCESS = 0,
+ ERR_FAILED = 1,
+ ERR_OUTPUT_TOO_MANY = 3,
+ ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_ACTION_UNSUPPORTED = 28,
+ ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_OUTPUT_INVALID = 33,
+ ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_MSG_NO_MSG
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error);
+
/*
* Resources
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d272c73c62..f5605e81cb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d779dc481f..d0dad8e8f8 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -36,6 +36,29 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Error handling
+ */
+
+static const struct {
+ const char *message;
+} err_msg[] = {
+ /* 00 */ { "Operation successfully completed" },
+ /* 01 */ { "Operation failed" },
+ /* 29 */ { "Removing flow failed" },
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error)
+{
+ assert(msg < ERR_MSG_NO_MSG);
+
+ if (error) {
+ error->message = err_msg[msg].message;
+ error->type = (msg == ERR_SUCCESS) ? RTE_FLOW_ERROR_TYPE_NONE :
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+}
+
/*
* Resources
*/
@@ -136,7 +159,8 @@ static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
return NULL;
}
- return NULL;
+ return profile_inline_ops->flow_create_profile_inline(dev, attr,
+ forced_vlan_vid, caller_id, item, action, error);
}
static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
@@ -149,7 +173,7 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return -1;
}
- return -1;
+ return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
new file mode 100644
index 0000000000..a6293f5f82
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -0,0 +1,65 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "flow_api_profile_inline.h"
+#include "ntnic_mod_reg.h"
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ return NULL;
+}
+
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error)
+{
+ assert(dev);
+ assert(fh);
+
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ return err;
+}
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *flow,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (flow) {
+ /* Delete this flow */
+ pthread_mutex_lock(&dev->ndev->mtx);
+ err = flow_destroy_locked_profile_inline(dev, flow, error);
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ }
+
+ return err;
+}
+
+static const struct profile_inline_ops ops = {
+ /*
+ * Flow functionality
+ */
+ .flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
+ .flow_create_profile_inline = flow_create_profile_inline,
+ .flow_destroy_profile_inline = flow_destroy_profile_inline,
+};
+
+void profile_inline_init(void)
+{
+ register_profile_inline_ops(&ops);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
new file mode 100644
index 0000000000..a83cc299b4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -0,0 +1,33 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_H_
+#define _FLOW_API_PROFILE_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+#include "stream_binary_flow_api.h"
+
+/*
+ * Flow functionality
+ */
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+
+#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ad2266116f..593b56bf5b 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,9 +118,19 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+static const struct profile_inline_ops *profile_inline_ops;
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops)
+{
+ profile_inline_ops = ops;
+}
+
const struct profile_inline_ops *get_profile_inline_ops(void)
{
- return NULL;
+ if (profile_inline_ops == NULL)
+ profile_inline_init();
+
+ return profile_inline_ops;
}
static const struct flow_filter_ops *flow_filter_ops;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index ec8c1612d1..d133336fad 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -225,7 +225,30 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+struct profile_inline_ops {
+ /*
+ * Flow functionality
+ */
+ int (*flow_destroy_locked_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+ struct flow_handle *(*flow_create_profile_inline)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+};
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops);
const struct profile_inline_ops *get_profile_inline_ops(void);
+void profile_inline_init(void);
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
--
2.45.0
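The patch above wires the inline profile into the module registry: the profile registers an ops table, and get_profile_inline_ops() lazily initializes the module on first lookup. That registration pattern, sketched standalone with hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical ops table, mirroring struct profile_inline_ops. */
struct profile_ops {
	int (*flow_destroy)(void *flow);
};

static const struct profile_ops *registered_ops;

static int my_flow_destroy(void *flow)
{
	(void)flow;	/* stub: a real implementation tears the flow down */
	return 0;
}

static const struct profile_ops inline_ops = {
	.flow_destroy = my_flow_destroy,
};

static void register_profile_ops(const struct profile_ops *ops)
{
	registered_ops = ops;
}

/* Normally invoked from the module's init hook (profile_inline_init). */
static void profile_init(void)
{
	register_profile_ops(&inline_ops);
}

/* Lazy initialization on first lookup, as in get_profile_inline_ops(). */
static const struct profile_ops *get_profile_ops(void)
{
	if (registered_ops == NULL)
		profile_init();
	return registered_ops;
}
```

Callers only ever go through the getter, so the profile module can be swapped or left out at build time without touching call sites; a NULL return signals an uninitialized module, which the flow_api.c wrappers check before dispatching.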
* [PATCH v2 06/73] net/ntnic: add management API for NT flow profile
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (4 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 05/73] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
` (67 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The management API implements (re)setting of the NT flow dev.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 ++
drivers/net/ntnic/nthw/flow_api/flow_api.c | 60 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 20 +++++++
.../profile_inline/flow_api_profile_inline.h | 8 +++
drivers/net/ntnic/ntnic_mod_reg.h | 8 +++
6 files changed, 102 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 3bdfdd4f94..790b2f6b03 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -55,6 +55,7 @@ struct flow_nic_dev {
uint16_t ports; /* number of in-ports addressable on this NIC */
/* flow profile this NIC is initially prepared for */
enum flow_eth_dev_profile flow_profile;
+ int flow_mgnt_prepared;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index d025677e25..52ff3cb865 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -46,6 +46,11 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+struct flow_handle {
+ struct flow_eth_dev *dev;
+ struct flow_handle *next;
+};
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d0dad8e8f8..6800a8d834 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -10,6 +10,8 @@
#include "flow_api.h"
#include "flow_filter.h"
+#define SCATTER_GATHER
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -210,10 +212,29 @@ static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_de
static void flow_ndev_reset(struct flow_nic_dev *ndev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return;
+ }
+
/* Delete all eth-port devices created on this NIC device */
while (ndev->eth_base)
flow_delete_eth_dev(ndev->eth_base);
+ /* Error check */
+ while (ndev->flow_base) {
+ NT_LOG(ERR, FILTER,
+ "ERROR : Flows still defined but all eth-ports deleted. Flow %p",
+ ndev->flow_base);
+
+ profile_inline_ops->flow_destroy_profile_inline(ndev->flow_base->dev,
+ ndev->flow_base, NULL);
+ }
+
+ profile_inline_ops->done_flow_management_of_ndev_profile_inline(ndev);
+
km_free_ndev_resource_management(&ndev->km_res_handle);
kcc_free_ndev_resource_management(&ndev->kcc_res_handle);
@@ -255,6 +276,13 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
struct flow_nic_dev *ndev = eth_dev->ndev;
if (!ndev) {
@@ -271,6 +299,20 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
/* delete all created flows from this device */
pthread_mutex_lock(&ndev->mtx);
+ struct flow_handle *flow = ndev->flow_base;
+
+ while (flow) {
+ if (flow->dev == eth_dev) {
+ struct flow_handle *flow_next = flow->next;
+ profile_inline_ops->flow_destroy_locked_profile_inline(eth_dev, flow,
+ NULL);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
/*
* remove unmatched queue if setup in QSL
* remove exception queue setting in QSL UNM
@@ -445,6 +487,24 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->port = port_no;
eth_dev->port_id = port_id;
+	/* The first time the NIC is initialized */
+ if (!ndev->flow_mgnt_prepared) {
+ ndev->flow_profile = flow_profile;
+
+		/* Initialize modules if needed - recipe 0 is used as no-match and must be set up */
+ if (profile_inline_ops != NULL &&
+ profile_inline_ops->initialize_flow_management_of_ndev_profile_inline(ndev))
+ goto err_exit0;
+
+ } else {
+ /* check if same flow type is requested, otherwise fail */
+ if (ndev->flow_profile != flow_profile) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: Different flow types requested on same NIC device. Not supported.");
+ goto err_exit0;
+ }
+ }
+
/* Allocate the requested queues in HW for this dev */
for (i = 0; i < alloc_rx_queues; i++) {
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a6293f5f82..c9e4008b7e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,20 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+/*
+ * Public functions
+ */
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return -1;
+}
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return 0;
+}
+
struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid,
@@ -51,6 +65,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
}
static const struct profile_inline_ops ops = {
+ /*
+ * Management
+ */
+ .done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
+ .initialize_flow_management_of_ndev_profile_inline =
+ initialize_flow_management_of_ndev_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index a83cc299b4..b87f8542ac 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -11,6 +11,14 @@
#include "flow_api.h"
#include "stream_binary_flow_api.h"
+/*
+ * Management
+ */
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index d133336fad..149c549112 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -226,6 +226,14 @@ const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
struct profile_inline_ops {
+ /*
+ * Management
+ */
+
+ int (*done_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
+ int (*initialize_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
--
2.45.0
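flow_delete_eth_dev() in the patch above walks ndev->flow_base and destroys every flow owned by the departing eth device; since destroying a flow invalidates the current node, next is saved before the callback runs. A standalone sketch of that save-then-advance walk (hypothetical types; malloc/free stand in for flow creation/destruction, and the unlinking that the driver's destroy callback performs internally is done inline here):

```c
#include <assert.h>
#include <stdlib.h>

struct flow {
	int owner;		/* stand-in for flow->dev */
	struct flow *next;
};

/* Destroy every flow belonging to `owner`, capturing the successor before
 * the node is freed -- the same save-then-advance walk that
 * flow_delete_eth_dev() performs over ndev->flow_base.
 * Returns the new list head. */
static struct flow *destroy_owned(struct flow *head, int owner)
{
	struct flow **pp = &head;

	while (*pp) {
		if ((*pp)->owner == owner) {
			struct flow *gone = *pp;

			*pp = gone->next;	/* unlink, then free */
			free(gone);
		} else {
			pp = &(*pp)->next;
		}
	}

	return head;
}
```

In the driver the walk runs with ndev->mtx held, which is why the locked variant flow_destroy_locked_profile_inline() exists alongside the public flow_destroy_profile_inline().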
* [PATCH v2 07/73] net/ntnic: add NT flow profile management implementation
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (5 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 06/73] net/ntnic: add management API for NT flow profile Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 17:17 ` Stephen Hemminger
2024-10-22 16:54 ` [PATCH v2 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
` (66 subsequent siblings)
73 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Implement the functions required to (re)set the NT flow dev.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 4 ++
drivers/net/ntnic/include/flow_api_engine.h | 10 ++++
drivers/net/ntnic/meson.build | 4 ++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 55 +++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 52 ++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 19 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 59 +++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 23 ++++++++
.../profile_inline/flow_api_profile_inline.c | 52 ++++++++++++++++
9 files changed, 278 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 790b2f6b03..748da89262 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -61,6 +61,10 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *group_handle;
+ void *hw_db_handle;
+ void *id_table_handle;
+
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 52ff3cb865..2497c31a08 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -6,6 +6,8 @@
#ifndef _FLOW_API_ENGINE_H_
#define _FLOW_API_ENGINE_H_
+#include <stdint.h>
+
/*
* Resource management
*/
@@ -46,6 +48,9 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_CPY_WRITERS_SUPPORTED 8
+
+
struct flow_handle {
struct flow_eth_dev *dev;
struct flow_handle *next;
@@ -55,4 +60,9 @@ void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
+/*
+ * Group management
+ */
+int flow_group_handle_create(void **handle, uint32_t group_count);
+int flow_group_handle_destroy(void **handle);
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f5605e81cb..f7292144ac 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -18,6 +18,7 @@ includes = [
include_directories('nthw/supported'),
include_directories('nthw/model'),
include_directories('nthw/flow_filter'),
+ include_directories('nthw/flow_api'),
include_directories('nim/'),
]
@@ -47,7 +48,10 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/flow_group.c',
+ 'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
+ 'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
new file mode 100644
index 0000000000..a7371f3aad
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -0,0 +1,55 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "flow_api_engine.h"
+
+#define OWNER_ID_COUNT 256
+#define PORT_COUNT 8
+
+struct group_lookup_entry_s {
+ uint64_t ref_counter;
+ uint32_t *reverse_lookup;
+};
+
+struct group_handle_s {
+ uint32_t group_count;
+
+ uint32_t *translation_table;
+
+ struct group_lookup_entry_s *lookup_entries;
+};
+
+int flow_group_handle_create(void **handle, uint32_t group_count)
+{
+ struct group_handle_s *group_handle;
+
+ *handle = calloc(1, sizeof(struct group_handle_s));
+ if (*handle == NULL)
+ return -1;
+
+ group_handle = *handle;
+ group_handle->group_count = group_count;
+ group_handle->translation_table =
+ calloc((uint32_t)(group_count * PORT_COUNT * OWNER_ID_COUNT), sizeof(uint32_t));
+ group_handle->lookup_entries = calloc(group_count, sizeof(struct group_lookup_entry_s));
+
+ if (group_handle->translation_table == NULL || group_handle->lookup_entries == NULL)
+ return -1;
+
+ return 0;
+}
+
+int flow_group_handle_destroy(void **handle)
+{
+ if (*handle) {
+ struct group_handle_s *group_handle = (struct group_handle_s *)*handle;
+
+ free(group_handle->translation_table);
+ free(group_handle->lookup_entries);
+
+ free(*handle);
+ *handle = NULL;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
new file mode 100644
index 0000000000..9b46848e59
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <pthread.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "flow_id_table.h"
+
+#define NTNIC_ARRAY_BITS 14
+#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+
+struct ntnic_id_table_element {
+ union flm_handles handle;
+ uint8_t caller_id;
+ uint8_t type;
+};
+
+struct ntnic_id_table_data {
+ struct ntnic_id_table_element *arrays[NTNIC_ARRAY_SIZE];
+ pthread_mutex_t mtx;
+
+ uint32_t next_id;
+
+ uint32_t free_head;
+ uint32_t free_tail;
+ uint32_t free_count;
+};
+
+void *ntnic_id_table_create(void)
+{
+ struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
+
+ if (handle == NULL)
+ return NULL;
+
+ pthread_mutex_init(&handle->mtx, NULL);
+ handle->next_id = 1;
+
+ return handle;
+}
+
+void ntnic_id_table_destroy(void *id_table)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ for (uint32_t i = 0; i < NTNIC_ARRAY_SIZE; ++i)
+ free(handle->arrays[i]);
+
+ pthread_mutex_destroy(&handle->mtx);
+
+ free(id_table);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
new file mode 100644
index 0000000000..13455f1165
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLOW_ID_TABLE_H_
+#define _FLOW_ID_TABLE_H_
+
+#include <stdint.h>
+
+union flm_handles {
+ uint64_t idx;
+ void *p;
+};
+
+void *ntnic_id_table_create(void);
+void ntnic_id_table_destroy(void *id_table);
+
+#endif /* _FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
new file mode 100644
index 0000000000..5fda11183c
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+
+#include "flow_api_hw_db_inline.h"
+
+/******************************************************************************/
+/* Handle */
+/******************************************************************************/
+
+struct hw_db_inline_resource_db {
+ /* Actions */
+ struct hw_db_inline_resource_db_cot {
+ struct hw_db_inline_cot_data data;
+ int ref;
+ } *cot;
+
+ uint32_t nb_cot;
+
+ /* Hardware */
+
+ struct hw_db_inline_resource_db_cfn {
+ uint64_t priority;
+ int cfn_hw;
+ int ref;
+ } *cfn;
+};
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
+{
+ /* Note: calloc is required for functionality in the hw_db_inline_destroy() */
+ struct hw_db_inline_resource_db *db = calloc(1, sizeof(struct hw_db_inline_resource_db));
+
+ if (db == NULL)
+ return -1;
+
+ db->nb_cot = ndev->be.cat.nb_cat_funcs;
+ db->cot = calloc(db->nb_cot, sizeof(struct hw_db_inline_resource_db_cot));
+
+ if (db->cot == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ *db_handle = db;
+ return 0;
+}
+
+void hw_db_inline_destroy(void *db_handle)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ free(db->cot);
+
+ free(db->cfn);
+
+ free(db);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
new file mode 100644
index 0000000000..23caf73cf3
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_HW_DB_INLINE_H_
+#define _FLOW_API_HW_DB_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+
+struct hw_db_inline_cot_data {
+ uint32_t matcher_color_contrib : 4;
+ uint32_t frag_rcp : 4;
+ uint32_t padding : 24;
+};
+
+/**/
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
+void hw_db_inline_destroy(void *db_handle);
+
+#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index c9e4008b7e..986196b408 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,6 +4,9 @@
*/
#include "ntlog.h"
+#include "flow_api_engine.h"
+#include "flow_api_hw_db_inline.h"
+#include "flow_id_table.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
@@ -14,11 +17,60 @@
int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+ if (!ndev->flow_mgnt_prepared) {
+ /* Check static arrays are big enough */
+ assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+
+ ndev->id_table_handle = ntnic_id_table_create();
+
+ if (ndev->id_table_handle == NULL)
+ goto err_exit0;
+
+ if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
+ goto err_exit0;
+
+ if (hw_db_inline_create(ndev, &ndev->hw_db_handle))
+ goto err_exit0;
+
+ ndev->flow_mgnt_prepared = 1;
+ }
+
+ return 0;
+
+err_exit0:
+ done_flow_management_of_ndev_profile_inline(ndev);
return -1;
}
int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ if (ndev->flow_mgnt_prepared) {
+ flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+
+ flow_group_handle_destroy(&ndev->group_handle);
+ ntnic_id_table_destroy(ndev->id_table_handle);
+
+ flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+
+ hw_mod_tpe_reset(&ndev->be);
+ flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
+ flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+
+ hw_db_inline_destroy(ndev->hw_db_handle);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ ndev->flow_mgnt_prepared = 0;
+ }
+
return 0;
}
--
2.45.0
* [PATCH v2 08/73] net/ntnic: add create/destroy implementation for NT flows
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (6 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 17:20 ` Stephen Hemminger
2024-10-22 16:54 ` [PATCH v2 09/73] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
` (65 subsequent siblings)
73 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Implements flow create/destroy functions with minimal capabilities:
* item: any
* action: port_id
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 6 +
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 105 +++
.../ntnic/include/stream_binary_flow_api.h | 4 +
drivers/net/ntnic/meson.build | 2 +
drivers/net/ntnic/nthw/flow_api/flow_group.c | 44 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 79 +++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 4 +
.../flow_api/profile_inline/flm_lrn_queue.c | 28 +
.../flow_api/profile_inline/flm_lrn_queue.h | 14 +
.../profile_inline/flow_api_hw_db_inline.c | 93 +++
.../profile_inline/flow_api_hw_db_inline.h | 64 ++
.../profile_inline/flow_api_profile_inline.c | 657 ++++++++++++++++++
13 files changed, 1103 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b9b87bdfe..1c653fd5a0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,3 +12,9 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
Linux = Y
x86-64 = Y
+
+[rte_flow items]
+any = Y
+
+[rte_flow actions]
+port_id = Y
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 748da89262..667dad6d5f 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -68,6 +68,9 @@ struct flow_nic_dev {
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
+ /* linked list of all FLM flows created on this NIC */
+ struct flow_handle *flow_base_flm;
+ pthread_mutex_t flow_mtx;
/* NIC backend API */
struct flow_api_backend_s be;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 2497c31a08..b8da5eafba 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -7,6 +7,10 @@
#define _FLOW_API_ENGINE_H_
#include <stdint.h>
+#include <stdatomic.h>
+
+#include "hw_mod_backend.h"
+#include "stream_binary_flow_api.h"
/*
* Resource management
@@ -50,10 +54,107 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+enum flow_port_type_e {
+ PORT_NONE, /* not defined or drop */
+ PORT_INTERNAL, /* no queues attached */
+ PORT_PHY, /* MAC phy output queue */
+ PORT_VIRT, /* Memory queues to Host */
+};
+
+struct output_s {
+ uint32_t owning_port_id;/* the port who owns this output destination */
+ enum flow_port_type_e type;
+ int id; /* depending on port type: queue ID or physical port id or not used */
+ int active; /* activated */
+};
+
+struct nic_flow_def {
+ /*
+ * Frame Decoder match info collected
+ */
+ int l2_prot;
+ int l3_prot;
+ int l4_prot;
+ int tunnel_prot;
+ int tunnel_l3_prot;
+ int tunnel_l4_prot;
+ int vlans;
+ int fragmentation;
+ int ip_prot;
+ int tunnel_ip_prot;
+ /*
+ * Additional meta data for various functions
+ */
+ int in_port_override;
+ int non_empty; /* default value is -1; value 1 means flow actions update */
+ struct output_s dst_id[MAX_OUTPUT_DEST];/* define the output to use */
+ /* total number of available queues defined for all outputs - i.e. number of dst_id's */
+ int dst_num_avail;
+
+ /*
+ * Mark or Action info collection
+ */
+ uint32_t mark;
+
+ uint32_t jump_to_group;
+
+ int full_offload;
+};
+
+enum flow_handle_type {
+ FLOW_HANDLE_TYPE_FLOW,
+ FLOW_HANDLE_TYPE_FLM,
+};
struct flow_handle {
+ enum flow_handle_type type;
+ uint32_t flm_id;
+ uint16_t caller_id;
+ uint16_t learn_ignored;
+
struct flow_eth_dev *dev;
struct flow_handle *next;
+ struct flow_handle *prev;
+
+ void *user_data;
+
+ union {
+ struct {
+ /*
+ * 1st step conversion and validation of flow
+ * verified and converted flow match + actions structure
+ */
+ struct nic_flow_def *fd;
+ /*
+ * 2nd step NIC HW resource allocation and configuration
+ * NIC resource management structures
+ */
+ struct {
+ uint32_t db_idx_counter;
+ uint32_t db_idxs[RES_COUNT];
+ };
+ uint32_t port_id; /* MAC port ID or override of virtual in_port */
+ };
+
+ struct {
+ uint32_t flm_db_idx_counter;
+ uint32_t flm_db_idxs[RES_COUNT];
+
+ uint32_t flm_data[10];
+ uint8_t flm_prot;
+ uint8_t flm_kid;
+ uint8_t flm_prio;
+ uint8_t flm_ft;
+
+ uint16_t flm_rpl_ext_ptr;
+ uint32_t flm_nat_ipv4;
+ uint16_t flm_nat_port;
+ uint8_t flm_dscp;
+ uint32_t flm_teid;
+ uint8_t flm_rqi;
+ uint8_t flm_qfi;
+ };
+ };
};
void km_free_ndev_resource_management(void **handle);
@@ -65,4 +166,8 @@ void kcc_free_ndev_resource_management(void **handle);
*/
int flow_group_handle_create(void **handle, uint32_t group_count);
int flow_group_handle_destroy(void **handle);
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out);
+
#endif /* _FLOW_API_ENGINE_H_ */
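The extended flow_handle above keeps per-type state in an anonymous union selected by the `type` field: FLOW_HANDLE_TYPE_FLOW uses the `fd`/resource-index view, while FLOW_HANDLE_TYPE_FLM uses the `flm_*` view. A minimal standalone sketch of this discriminated-union pattern (names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

enum handle_type { HANDLE_SW, HANDLE_HW };

struct handle {
	enum handle_type type;	/* discriminator: selects the active union view */
	union {
		struct {	/* valid when type == HANDLE_SW */
			uint32_t sw_idx;
		};
		struct {	/* valid when type == HANDLE_HW */
			uint32_t hw_id;
			uint8_t hw_prio;
		};
	};
};

/* Dispatch on the discriminator before touching union members. */
static uint32_t handle_id(const struct handle *h)
{
	return h->type == HANDLE_SW ? h->sw_idx : h->hw_id;
}
```

Anonymous structs/unions (C11) let both views be accessed directly on the handle; the cost is that nothing but the `type` field tells a reader which members are currently valid.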
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index a6244d4082..d878b848c2 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -8,6 +8,10 @@
#include "rte_flow.h"
#include "rte_flow_driver.h"
+
+/* Max RSS hash key length in bytes */
+#define MAX_RSS_KEY_LEN 40
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f7292144ac..e1fef37ccb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -50,6 +50,8 @@ sources = files(
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
+ 'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
index a7371f3aad..f76986b178 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_group.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -53,3 +53,47 @@ int flow_group_handle_destroy(void **handle)
return 0;
}
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out)
+{
+ struct group_handle_s *group_handle = (struct group_handle_s *)handle;
+ uint32_t *table_ptr;
+ uint32_t lookup;
+
+ if (group_handle == NULL || group_in >= group_handle->group_count || port_id >= PORT_COUNT)
+ return -1;
+
+ /* Don't translate group 0 */
+ if (group_in == 0) {
+ *group_out = 0;
+ return 0;
+ }
+
+ table_ptr = &group_handle->translation_table[port_id * OWNER_ID_COUNT * PORT_COUNT +
+ owner_id * OWNER_ID_COUNT + group_in];
+ lookup = *table_ptr;
+
+ if (lookup == 0) {
+ for (lookup = 1; lookup < group_handle->group_count &&
+ group_handle->lookup_entries[lookup].ref_counter > 0;
+ ++lookup)
+ ;
+
+ if (lookup < group_handle->group_count) {
+ group_handle->lookup_entries[lookup].reverse_lookup = table_ptr;
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+
+ *table_ptr = lookup;
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+ }
+
+ *group_out = lookup;
+ return 0;
+}
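flow_group_translate_get() maps an arbitrary rte_flow group number, scoped by owner and port, onto one of the NIC's limited hardware group slots, reference-counting each slot so it can be recycled once unused. A simplified single-port sketch of the same allocation idea (sizes and names are illustrative; the driver additionally keeps a reverse-lookup pointer per slot):

```c
#include <assert.h>
#include <stdint.h>

#define USER_GROUPS 256	/* user-visible group number space */
#define HW_GROUPS 8	/* hardware group slots; slot 0 = default group */

struct group_map {
	uint32_t fwd[USER_GROUPS];	/* user group -> hw slot, 0 = unmapped */
	uint32_t ref[HW_GROUPS];	/* live references per hw slot */
};

/* Translate a user group to a hw slot, claiming the first free slot on
 * first use and bumping the slot's refcount; group 0 is never remapped. */
static int group_get(struct group_map *m, uint32_t user, uint32_t *hw)
{
	if (user >= USER_GROUPS)
		return -1;

	if (user == 0) {
		*hw = 0;
		return 0;
	}

	if (m->fwd[user] == 0) {
		uint32_t slot = 1;

		while (slot < HW_GROUPS && m->ref[slot] > 0)
			++slot;

		if (slot == HW_GROUPS)
			return -1;	/* all hw slots in use */

		m->fwd[user] = slot;
	}

	m->ref[m->fwd[user]] += 1;
	*hw = m->fwd[user];
	return 0;
}
```

Repeated lookups of the same user group return the same compact slot, which is what lets the hardware tables stay small while rte_flow groups remain arbitrary 32-bit values.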
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 9b46848e59..5635ac4524 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -4,6 +4,7 @@
*/
#include <pthread.h>
+#include <stdint.h>
#include <stdlib.h>
#include <string.h>
@@ -11,6 +12,10 @@
#define NTNIC_ARRAY_BITS 14
#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+#define NTNIC_ARRAY_MASK (NTNIC_ARRAY_SIZE - 1)
+#define NTNIC_MAX_ID (NTNIC_ARRAY_SIZE * NTNIC_ARRAY_SIZE)
+#define NTNIC_MAX_ID_MASK (NTNIC_MAX_ID - 1)
+#define NTNIC_MIN_FREE 1000
struct ntnic_id_table_element {
union flm_handles handle;
@@ -29,6 +34,36 @@ struct ntnic_id_table_data {
uint32_t free_count;
};
+static inline struct ntnic_id_table_element *
+ntnic_id_table_array_find_element(struct ntnic_id_table_data *handle, uint32_t id)
+{
+ uint32_t idx_d1 = id & NTNIC_ARRAY_MASK;
+ uint32_t idx_d2 = (id >> NTNIC_ARRAY_BITS) & NTNIC_ARRAY_MASK;
+
+ if (handle->arrays[idx_d2] == NULL) {
+ handle->arrays[idx_d2] =
+ calloc(NTNIC_ARRAY_SIZE, sizeof(struct ntnic_id_table_element));
+ }
+
+ return &handle->arrays[idx_d2][idx_d1];
+}
+
+static inline uint32_t ntnic_id_table_array_pop_free_id(struct ntnic_id_table_data *handle)
+{
+ uint32_t id = 0;
+
+ if (handle->free_count > NTNIC_MIN_FREE) {
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_tail);
+ id = handle->free_tail;
+
+ handle->free_tail = element->handle.idx & NTNIC_MAX_ID_MASK;
+ handle->free_count -= 1;
+ }
+
+ return id;
+}
+
void *ntnic_id_table_create(void)
{
struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
@@ -50,3 +85,47 @@ void ntnic_id_table_destroy(void *id_table)
free(id_table);
}
+
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ uint32_t new_id = ntnic_id_table_array_pop_free_id(handle);
+
+ if (new_id == 0)
+ new_id = handle->next_id++;
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, new_id);
+ element->caller_id = caller_id;
+ element->type = type;
+ memcpy(&element->handle, &flm_h, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+
+ return new_id;
+}
+
+void ntnic_id_table_free_id(void *id_table, uint32_t id)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *current_element =
+ ntnic_id_table_array_find_element(handle, id);
+ memset(current_element, 0, sizeof(struct ntnic_id_table_element));
+
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_head);
+ element->handle.idx = id;
+ handle->free_head = id;
+ handle->free_count += 1;
+
+ if (handle->free_tail == 0)
+ handle->free_tail = handle->free_head;
+
+ pthread_mutex_unlock(&handle->mtx);
+}
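ntnic_id_table_array_find_element() addresses a sparse two-level array: the low NTNIC_ARRAY_BITS of an id select the element within a chunk, the next bits select the chunk, and chunks are calloc'ed on first touch, so up to NTNIC_ARRAY_SIZE * NTNIC_ARRAY_SIZE ids are reachable without preallocating them. A shrunken sketch of just that lazy indexing (free list and locking omitted; sizes are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define TBL_BITS 4
#define TBL_SIZE (1u << TBL_BITS)	/* 16 entries per chunk, 16 chunks */
#define TBL_MASK (TBL_SIZE - 1)

struct elem {
	uint64_t payload;
};

struct id_table {
	struct elem *chunks[TBL_SIZE];	/* second level, allocated on demand */
	uint32_t next_id;		/* ids start at 1; 0 means "invalid" */
};

/* Locate the element backing an id, allocating its chunk on first touch. */
static struct elem *table_find(struct id_table *t, uint32_t id)
{
	uint32_t lo = id & TBL_MASK;
	uint32_t hi = (id >> TBL_BITS) & TBL_MASK;

	if (t->chunks[hi] == NULL)
		t->chunks[hi] = calloc(TBL_SIZE, sizeof(struct elem));

	return &t->chunks[hi][lo];
}

static uint32_t table_alloc(struct id_table *t, uint64_t payload)
{
	uint32_t id = t->next_id++;

	table_find(t, id)->payload = payload;
	return id;
}
```

Only the chunks actually indexed are ever allocated, which keeps memory proportional to the number of live flows rather than to the id space.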
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index 13455f1165..e190fe4a11 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -16,4 +16,8 @@ union flm_handles {
void *ntnic_id_table_create(void);
void ntnic_id_table_destroy(void *id_table);
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type);
+void ntnic_id_table_free_id(void *id_table, uint32_t id);
+
#endif /* _FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
new file mode 100644
index 0000000000..ad7efafe08
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+
+#include "hw_mod_flm_v25.h"
+
+#include "flm_lrn_queue.h"
+
+#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ unsigned int n = rte_ring_enqueue_zc_burst_elem_start(q, ELEM_SIZE, 1, &zcd, NULL);
+ return (n == 0) ? NULL : zcd.ptr1;
+}
+
+void flm_lrn_queue_release_write_buffer(void *q)
+{
+ rte_ring_enqueue_zc_elem_finish(q, 1);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
new file mode 100644
index 0000000000..8cee0c8e78
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_LRN_QUEUE_H_
+#define _FLM_LRN_QUEUE_H_
+
+#include <stdint.h>
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q);
+void flm_lrn_queue_release_write_buffer(void *q);
+
+#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5fda11183c..4ea9387c80 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -3,7 +3,11 @@
*/
+#include "hw_mod_backend.h"
+#include "flow_api_engine.h"
+
#include "flow_api_hw_db_inline.h"
+#include "rte_common.h"
/******************************************************************************/
/* Handle */
@@ -57,3 +61,92 @@ void hw_db_inline_destroy(void *db_handle)
free(db);
}
+
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size)
+{
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_COT:
+ hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/******************************************************************************/
+/* COT */
+/******************************************************************************/
+
+static int hw_db_inline_cot_compare(const struct hw_db_inline_cot_data *data1,
+ const struct hw_db_inline_cot_data *data2)
+{
+ return data1->matcher_color_contrib == data2->matcher_color_contrib &&
+ data1->frag_rcp == data2->frag_rcp;
+}
+
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cot_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_COT;
+
+ for (uint32_t i = 1; i < db->nb_cot; ++i) {
+ int ref = db->cot[i].ref;
+
+ if (ref > 0 && hw_db_inline_cot_compare(data, &db->cot[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cot_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cot[idx.ids].ref = 1;
+ memcpy(&db->cot[idx.ids].data, data, sizeof(struct hw_db_inline_cot_data));
+
+ return idx;
+}
+
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cot[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cot[idx.ids].ref -= 1;
+
+ if (db->cot[idx.ids].ref <= 0) {
+ memset(&db->cot[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cot_data));
+ db->cot[idx.ids].ref = 0;
+ }
+}
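hw_db_inline_cot_add() above implements a small deduplicating resource database: an identical live COT entry is shared by bumping its refcount, otherwise the first free slot is claimed, and deref clears the slot once the count drops to zero (the driver reserves index 0). The same pattern in a standalone sketch (sizes and names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NB_ENTRIES 4

struct db_entry {
	uint32_t data;
	int ref;
};

static struct db_entry db[NB_ENTRIES];

/* Add data: reuse an identical live entry (bump its refcount) or claim
 * the first free slot; returns the slot index, or -1 when the db is full. */
static int db_add(uint32_t data)
{
	int free_slot = -1;

	for (int i = 0; i < NB_ENTRIES; ++i) {
		if (db[i].ref > 0 && db[i].data == data) {
			db[i].ref += 1;
			return i;
		}

		if (free_slot < 0 && db[i].ref <= 0)
			free_slot = i;
	}

	if (free_slot < 0)
		return -1;

	db[free_slot].data = data;
	db[free_slot].ref = 1;
	return free_slot;
}

/* Drop one reference; clear the slot when the last reference goes away. */
static void db_deref(int i)
{
	if (--db[i].ref <= 0)
		memset(&db[i], 0, sizeof(db[i]));
}
```

Deduplication matters here because many flows resolve to identical hardware records, and the hardware tables are far smaller than the number of flows.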
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 23caf73cf3..0116af015d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -9,15 +9,79 @@
#include "flow_api.h"
+#define HW_DB_INLINE_MAX_QST_PER_QSL 128
+#define HW_DB_INLINE_MAX_ENCAP_SIZE 128
+
+#define HW_DB_IDX \
+ union { \
+ struct { \
+ uint32_t id1 : 8; \
+ uint32_t id2 : 8; \
+ uint32_t id3 : 8; \
+ uint32_t type : 7; \
+ uint32_t error : 1; \
+ }; \
+ struct { \
+ uint32_t ids : 24; \
+ }; \
+ uint32_t raw; \
+ }
+
+/* Strongly typed int types */
+struct hw_db_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_cot_idx {
+ HW_DB_IDX;
+};
+
+enum hw_db_idx_type {
+ HW_DB_IDX_TYPE_NONE = 0,
+ HW_DB_IDX_TYPE_COT,
+};
+
+/* Functionality data types */
+struct hw_db_inline_qsl_data {
+ uint32_t discard : 1;
+ uint32_t drop : 1;
+ uint32_t table_size : 7;
+ uint32_t retransmit : 1;
+ uint32_t padding : 22;
+
+ struct {
+ uint16_t queue : 7;
+ uint16_t queue_en : 1;
+ uint16_t tx_port : 3;
+ uint16_t tx_port_en : 1;
+ uint16_t padding : 4;
+ } table[HW_DB_INLINE_MAX_QST_PER_QSL];
+};
+
struct hw_db_inline_cot_data {
uint32_t matcher_color_contrib : 4;
uint32_t frag_rcp : 4;
uint32_t padding : 24;
};
+struct hw_db_inline_hsh_data {
+ uint32_t func;
+ uint64_t hash_mask;
+ uint8_t key[MAX_RSS_KEY_LEN];
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
void hw_db_inline_destroy(void *db_handle);
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size);
+
+/**/
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data);
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
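HW_DB_IDX overlays three views of the same 32 bits: three 8-bit ids, a single 24-bit `ids` field covering all three, a 7-bit type plus a 1-bit error flag, and `raw` for the whole word. A standalone copy of that layout shows the overlap; note that bit-field placement is ABI-defined, so the checks below assume the usual GCC/Clang little-endian allocation:

```c
#include <assert.h>
#include <stdint.h>

/* Same layout idea as HW_DB_IDX: three 8-bit ids overlaid by one 24-bit
 * 'ids' view, a 7-bit type, a 1-bit error flag, and the raw 32-bit word. */
union db_idx {
	struct {
		uint32_t id1 : 8;
		uint32_t id2 : 8;
		uint32_t id3 : 8;
		uint32_t type : 7;
		uint32_t error : 1;
	};
	struct {
		uint32_t ids : 24;
	};
	uint32_t raw;
};
```

The `ids` view lets comparison and hashing treat the three component ids as one integer, while `type` and `error` stay out of that range; this is why HW_DB_IDX is described as a "strongly typed int" in the header.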
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 986196b408..7f9869a511 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,12 +4,545 @@
*/
#include "ntlog.h"
+#include "nt_util.h"
+
+#include "hw_mod_backend.h"
+#include "flm_lrn_queue.h"
+#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
#include "flow_id_table.h"
+#include "stream_binary_flow_api.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#include <rte_common.h>
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
+static void *flm_lrn_queue_arr;
+
+struct flm_flow_key_def_s {
+ union {
+ struct {
+ uint64_t qw0_dyn : 7;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 7;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 7;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 7;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_proto : 1;
+ uint64_t inner_proto : 1;
+ uint64_t pad : 2;
+ };
+ uint64_t data;
+ };
+ uint32_t mask[10];
+};
+
+/*
+ * Flow Matcher functionality
+ */
+static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
+{
+ struct flow_eth_dev *dev = ndev->eth_base;
+
+ while (dev) {
+ if (dev->port_id == port_id)
+ return dev->port;
+
+ dev = dev->next;
+ }
+
+ return UINT8_MAX;
+}
+
+static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base)
+ ndev->flow_base->prev = fh;
+
+ fh->next = ndev->flow_base;
+ fh->prev = NULL;
+ ndev->flow_base = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ struct flow_handle *next;
+ struct flow_handle *prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ next = fh->next;
+ prev = fh->prev;
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base == fh) {
+ ndev->flow_base = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base_flm)
+ ndev->flow_base_flm->prev = fh;
+
+ fh->next = ndev->flow_base_flm;
+ fh->prev = NULL;
+ ndev->flow_base_flm = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh_flm)
+{
+ struct flow_handle *next;
+ struct flow_handle *prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ next = fh_flm->next;
+ prev = fh_flm->prev;
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base_flm = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base_flm == fh_flm) {
+ ndev->flow_base_flm = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
+{
+ if (fd) {
+ fd->full_offload = -1;
+ fd->in_port_override = -1;
+ fd->mark = UINT32_MAX;
+ fd->jump_to_group = UINT32_MAX;
+
+ fd->l2_prot = -1;
+ fd->l3_prot = -1;
+ fd->l4_prot = -1;
+ fd->vlans = 0;
+ fd->tunnel_prot = -1;
+ fd->tunnel_l3_prot = -1;
+ fd->tunnel_l4_prot = -1;
+ fd->fragmentation = -1;
+ fd->ip_prot = -1;
+ fd->tunnel_ip_prot = -1;
+
+ fd->non_empty = -1;
+ }
+
+ return fd;
+}
+
+static inline struct nic_flow_def *allocate_nic_flow_def(void)
+{
+ return prepare_nic_flow_def(calloc(1, sizeof(struct nic_flow_def)));
+}
+
+static bool fd_has_empty_pattern(const struct nic_flow_def *fd)
+{
+ return fd && fd->vlans == 0 && fd->l2_prot < 0 && fd->l3_prot < 0 && fd->l4_prot < 0 &&
+ fd->tunnel_prot < 0 && fd->tunnel_l3_prot < 0 && fd->tunnel_l4_prot < 0 &&
+ fd->ip_prot < 0 && fd->tunnel_ip_prot < 0 && fd->non_empty < 0;
+}
+
+static inline const void *memcpy_mask_if(void *dest, const void *src, const void *mask,
+ size_t count)
+{
+ if (mask == NULL)
+ return src;
+
+ unsigned char *dest_ptr = (unsigned char *)dest;
+ const unsigned char *src_ptr = (const unsigned char *)src;
+ const unsigned char *mask_ptr = (const unsigned char *)mask;
+
+ for (size_t i = 0; i < count; ++i)
+ dest_ptr[i] = src_ptr[i] & mask_ptr[i];
+
+ return dest;
+}
+
+static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLM)
+ return -1;
+
+ if (flm_op == NT_FLM_OP_LEARN) {
+ union flm_handles flm_h;
+ flm_h.p = fh;
+ fh->flm_id = ntnic_id_table_get_id(fh->dev->ndev->id_table_handle, flm_h,
+ fh->caller_id, 1);
+ }
+
+ uint32_t flm_id = fh->flm_id;
+
+ if (flm_op == NT_FLM_OP_UNLEARN) {
+ ntnic_id_table_free_id(fh->dev->ndev->id_table_handle, flm_id);
+
+ if (fh->learn_ignored == 1)
+ return 0;
+ }
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->id = flm_id;
+
+ learn_record->qw0[0] = fh->flm_data[9];
+ learn_record->qw0[1] = fh->flm_data[8];
+ learn_record->qw0[2] = fh->flm_data[7];
+ learn_record->qw0[3] = fh->flm_data[6];
+ learn_record->qw4[0] = fh->flm_data[5];
+ learn_record->qw4[1] = fh->flm_data[4];
+ learn_record->qw4[2] = fh->flm_data[3];
+ learn_record->qw4[3] = fh->flm_data[2];
+ learn_record->sw8 = fh->flm_data[1];
+ learn_record->sw9 = fh->flm_data[0];
+ learn_record->prot = fh->flm_prot;
+
+ /* Last non-zero mtr is used for statistics */
+ uint8_t mbrs = 0;
+
+ learn_record->vol_idx = mbrs;
+
+ learn_record->nat_ip = fh->flm_nat_ipv4;
+ learn_record->nat_port = fh->flm_nat_port;
+ learn_record->nat_en = fh->flm_nat_ipv4 || fh->flm_nat_port ? 1 : 0;
+
+ learn_record->dscp = fh->flm_dscp;
+ learn_record->teid = fh->flm_teid;
+ learn_record->qfi = fh->flm_qfi;
+ learn_record->rqi = fh->flm_rqi;
+ /* Lower 10 bits used for RPL EXT PTR */
+ learn_record->color = fh->flm_rpl_ext_ptr & 0x3ff;
+
+ learn_record->ent = 0;
+ learn_record->op = flm_op & 0xf;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->prio = fh->flm_prio & 0x3;
+ learn_record->ft = fh->flm_ft;
+ learn_record->kid = fh->flm_kid;
+ learn_record->eor = 1;
+ learn_record->scrub_prof = 0;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+ return 0;
+}
+
+/*
+ * This function must be callable without locking any mutexes
+ */
+static int interpret_flow_actions(const struct flow_eth_dev *dev,
+ const struct rte_flow_action action[],
+ const struct rte_flow_action *action_mask,
+ struct nic_flow_def *fd,
+ struct rte_flow_error *error,
+ uint32_t *num_dest_port,
+ uint32_t *num_queues)
+{
+ unsigned int encap_decap_order = 0;
+
+ *num_dest_port = 0;
+ *num_queues = 0;
+
+ if (action == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow actions missing");
+ return -1;
+ }
+
+ /*
+ * Gather flow match + actions and convert them into the internal flow
+ * definition structure (struct nic_flow_def). This is the first step in
+ * flow creation: validate, convert and prepare.
+ */
+ for (int aidx = 0; action[aidx].type != RTE_FLOW_ACTION_TYPE_END; ++aidx) {
+ switch (action[aidx].type) {
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_PORT_ID", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_port_id port_id_tmp;
+ const struct rte_flow_action_port_id *port_id =
+ memcpy_mask_if(&port_id_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_port_id));
+
+ if (*num_dest_port > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple port_id actions for one flow is not supported");
+ flow_nic_set_error(ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED,
+ error);
+ return -1;
+ }
+
+ uint8_t port = get_port_from_port_id(dev->ndev, port_id->id);
+
+ if (fd->dst_num_avail == MAX_OUTPUT_DEST) {
+ NT_LOG(ERR, FILTER, "Too many output destinations");
+ flow_nic_set_error(ERR_OUTPUT_TOO_MANY, error);
+ return -1;
+ }
+
+ if (port >= dev->ndev->be.num_phy_ports) {
+ NT_LOG(ERR, FILTER, "Phy port out of range");
+ flow_nic_set_error(ERR_OUTPUT_INVALID, error);
+ return -1;
+ }
+
+ /* New destination port to add */
+ fd->dst_id[fd->dst_num_avail].owning_port_id = port_id->id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_PHY;
+ fd->dst_id[fd->dst_num_avail].id = (int)port;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ if (fd->full_offload < 0)
+ fd->full_offload = 1;
+
+ *num_dest_port += 1;
+
+ NT_LOG(DBG, FILTER, "Phy port ID: %i", (int)port);
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
+ action[aidx].type);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+ }
+
+ if (!(encap_decap_order == 0 || encap_decap_order == 2)) {
+ NT_LOG(ERR, FILTER, "Invalid encap/decap actions");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int interpret_flow_elements(const struct flow_eth_dev *dev,
+ const struct rte_flow_item elem[],
+ struct nic_flow_def *fd __rte_unused,
+ struct rte_flow_error *error,
+ uint16_t implicit_vlan_vid __rte_unused,
+ uint32_t *in_port_id,
+ uint32_t *packet_data,
+ uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
+{
+ *in_port_id = UINT32_MAX;
+
+ memset(packet_data, 0x0, sizeof(uint32_t) * 10);
+ memset(packet_mask, 0x0, sizeof(uint32_t) * 10);
+ memset(key_def, 0x0, sizeof(struct flm_flow_key_def_s));
+
+ if (elem == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow items missing");
+ return -1;
+ }
+
+ int qw_reserved_mac = 0;
+ int qw_reserved_ipv6 = 0;
+
+ int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
+
+ if (qw_free < 0) {
+ NT_LOG(ERR, FILTER, "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ANY:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
+ (int)elem[eidx].type);
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
+ uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
+ uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
+ uint32_t priority __rte_unused)
+{
+ struct nic_flow_def *fd;
+ struct flow_handle fh_copy;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLOW)
+ return -1;
+
+ memcpy(&fh_copy, fh, sizeof(struct flow_handle));
+ memset(fh, 0x0, sizeof(struct flow_handle));
+ fd = fh_copy.fd;
+
+ fh->type = FLOW_HANDLE_TYPE_FLM;
+ fh->caller_id = fh_copy.caller_id;
+ fh->dev = fh_copy.dev;
+ fh->next = fh_copy.next;
+ fh->prev = fh_copy.prev;
+ fh->user_data = fh_copy.user_data;
+
+ fh->flm_db_idx_counter = fh_copy.db_idx_counter;
+
+ for (int i = 0; i < RES_COUNT; ++i)
+ fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+
+ free(fd);
+
+ return 0;
+}
+
+static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
+ const struct nic_flow_def *fd __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ uint32_t group __rte_unused,
+ uint32_t local_idxs[] __rte_unused,
+ uint32_t *local_idx_counter __rte_unused,
+ uint16_t *flm_rpl_ext_ptr __rte_unused,
+ uint32_t *flm_ft __rte_unused,
+ uint32_t *flm_scrub __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ return 0;
+}
+
+static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct nic_flow_def *fd,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
+ struct rte_flow_error *error, uint32_t port_id,
+ uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ struct flm_flow_key_def_s *key_def __rte_unused)
+{
+ struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
+
+ fh->type = FLOW_HANDLE_TYPE_FLOW;
+ fh->port_id = port_id;
+ fh->dev = dev;
+ fh->fd = fd;
+ fh->caller_id = caller_id;
+
+ struct hw_db_inline_qsl_data qsl_data;
+
+ struct hw_db_inline_hsh_data hsh_data;
+
+ if (attr->group > 0 && fd_has_empty_pattern(fd)) {
+ /*
+ * Default flow for group 1..32
+ */
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, NULL, NULL, NULL, error)) {
+ goto error_out;
+ }
+
+ nic_insert_flow(dev->ndev, fh);
+
+ } else if (attr->group > 0) {
+ /*
+ * Flow for group 1..32
+ */
+
+ /* Setup Actions */
+ uint16_t flm_rpl_ext_ptr = 0;
+ uint32_t flm_ft = 0;
+ uint32_t flm_scrub = 0;
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, &flm_rpl_ext_ptr, &flm_ft,
+ &flm_scrub, error)) {
+ goto error_out;
+ }
+
+ /* Program flow */
+ convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ flm_scrub, attr->priority & 0x3);
+ flm_flow_programming(fh, NT_FLM_OP_LEARN);
+
+ nic_insert_flow_flm(dev->ndev, fh);
+
+ } else {
+ /*
+ * Flow for group 0
+ */
+ nic_insert_flow(dev->ndev, fh);
+ }
+
+ return fh;
+
+error_out:
+
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ } else {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ }
+
+ free(fh);
+
+ return NULL;
+}
/*
* Public functions
@@ -82,6 +615,92 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_action action[],
struct rte_flow_error *error)
{
+ struct flow_handle *fh = NULL;
+ int res;
+
+ uint32_t port_id = UINT32_MAX;
+ uint32_t num_dest_port;
+ uint32_t num_queues;
+
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ struct rte_flow_attr attr_local;
+ memcpy(&attr_local, attr, sizeof(struct rte_flow_attr));
+ uint16_t forced_vlan_vid_local = forced_vlan_vid;
+ uint16_t caller_id_local = caller_id;
+
+ if (attr_local.group > 0)
+ forced_vlan_vid_local = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ if (fd == NULL)
+ goto err_exit;
+
+ res = interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
+
+ if (res)
+ goto err_exit;
+
+ res = interpret_flow_elements(dev, elem, fd, error, forced_vlan_vid_local, &port_id,
+ packet_data, packet_mask, &key_def);
+
+ if (res)
+ goto err_exit;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ /* Translate group IDs */
+ if (fd->jump_to_group != UINT32_MAX &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ fd->jump_to_group, &fd->jump_to_group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (attr_local.group > 0 &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ attr_local.group, &attr_local.group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (port_id == UINT32_MAX)
+ port_id = dev->port_id;
+
+ /* Create and flush filter to NIC */
+ fh = create_flow_filter(dev, fd, &attr_local, forced_vlan_vid_local,
+ caller_id_local, error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ if (!fh)
+ goto err_exit;
+
+ NT_LOG(DBG, FILTER, "New FLOW: fh (flow handle) %p, fd (flow definition) %p", fh, fd);
+ NT_LOG(DBG, FILTER, ">>>>> [Dev %p] Nic %i, Port %i: fh %p fd %p - implementation <<<<<",
+ dev, dev->ndev->adapter_no, dev->port, fh, fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return fh;
+
+err_exit:
+
+ if (fh)
+ flow_destroy_locked_profile_inline(dev, fh, NULL);
+
+ else
+ free(fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ NT_LOG(ERR, FILTER, "ERR: %s", __func__);
return NULL;
}
@@ -96,6 +715,44 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
flow_nic_set_error(ERR_SUCCESS, error);
+ /* take flow out of ndev list - may not have been put there yet */
+ if (fh->type == FLOW_HANDLE_TYPE_FLM)
+ nic_remove_flow_flm(dev->ndev, fh);
+
+ else
+ nic_remove_flow(dev->ndev, fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ NT_LOG(DBG, FILTER, "removing flow: %p", fh);
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ flm_flow_programming(fh, NT_FLM_OP_UNLEARN);
+
+ } else {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ free(fh->fd);
+ }
+
+ if (err) {
+ NT_LOG(ERR, FILTER, "FAILED removing flow: %p", fh);
+ flow_nic_set_error(ERR_REMOVE_FLOW_FAILED, error);
+ }
+
+ free(fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
return err;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 09/73] net/ntnic: add infrastructure for flow actions and items
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (7 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 10/73] net/ntnic: add action queue Serhii Iliushyk
` (64 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add entities (utilities, structures, etc.) required for the flow API
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/flow_api.h | 34 ++++++++
drivers/net/ntnic/include/flow_api_engine.h | 46 +++++++++++
drivers/net/ntnic/include/hw_mod_backend.h | 33 ++++++++
drivers/net/ntnic/nthw/flow_api/flow_km.c | 81 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 68 +++++++++++++++-
5 files changed, 258 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 667dad6d5f..7f031ccda8 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -85,13 +85,47 @@ struct flow_nic_dev {
enum flow_nic_err_msg_e {
ERR_SUCCESS = 0,
ERR_FAILED = 1,
+ ERR_MEMORY = 2,
ERR_OUTPUT_TOO_MANY = 3,
+ ERR_RSS_TOO_MANY_QUEUES = 4,
+ ERR_VLAN_TYPE_NOT_SUPPORTED = 5,
+ ERR_VXLAN_HEADER_NOT_ACCEPTED = 6,
+ ERR_VXLAN_POP_INVALID_RECIRC_PORT = 7,
+ ERR_VXLAN_POP_FAILED_CREATING_VTEP = 8,
+ ERR_MATCH_VLAN_TOO_MANY = 9,
+ ERR_MATCH_INVALID_IPV6_HDR = 10,
+ ERR_MATCH_TOO_MANY_TUNNEL_PORTS = 11,
ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_FAILED_BY_HW_LIMITS = 13,
ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_MATCH_FAILED_TOO_COMPLEX = 15,
+ ERR_ACTION_REPLICATION_FAILED = 16,
+ ERR_ACTION_OUTPUT_RESOURCE_EXHAUSTION = 17,
+ ERR_ACTION_TUNNEL_HEADER_PUSH_OUTPUT_LIMIT = 18,
+ ERR_ACTION_INLINE_MOD_RESOURCE_EXHAUSTION = 19,
+ ERR_ACTION_RETRANSMIT_RESOURCE_EXHAUSTION = 20,
+ ERR_ACTION_FLOW_COUNTER_EXHAUSTION = 21,
+ ERR_ACTION_INTERNAL_RESOURCE_EXHAUSTION = 22,
+ ERR_INTERNAL_QSL_COMPARE_FAILED = 23,
+ ERR_INTERNAL_CAT_FUNC_REUSE_FAILED = 24,
+ ERR_MATCH_ENTROPHY_FAILED = 25,
+ ERR_MATCH_CAM_EXHAUSTED = 26,
+ ERR_INTERNAL_VIRTUAL_PORT_CREATION_FAILED = 27,
ERR_ACTION_UNSUPPORTED = 28,
ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_ACTION_NO_OUTPUT_DEFINED_USE_DEFAULT = 30,
+ ERR_ACTION_NO_OUTPUT_QUEUE_FOUND = 31,
+ ERR_MATCH_UNSUPPORTED_ETHER_TYPE = 32,
ERR_OUTPUT_INVALID = 33,
+ ERR_MATCH_PARTIAL_OFFLOAD_NOT_SUPPORTED = 34,
+ ERR_MATCH_CAT_CAM_EXHAUSTED = 35,
+ ERR_MATCH_KCC_KEY_CLASH = 36,
+ ERR_MATCH_CAT_CAM_FAILED = 37,
+ ERR_PARTIAL_FLOW_MARK_TOO_BIG = 38,
+ ERR_FLOW_PRIORITY_VALUE_INVALID = 39,
ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_RSS_TOO_LONG_KEY = 41,
+ ERR_ACTION_AGE_UNSUPPORTED_GROUP_0 = 42,
ERR_MSG_NO_MSG
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b8da5eafba..13fad2760a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -54,6 +54,30 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+#define MAX_MATCH_FIELDS 16
+
+struct match_elem_s {
+ int masked_for_tcam; /* if potentially selected for TCAM */
+ uint32_t e_word[4];
+ uint32_t e_mask[4];
+
+ int extr_start_offs_id;
+ int8_t rel_offs;
+ uint32_t word_len;
+};
+
+struct km_flow_def_s {
+ struct flow_api_backend_s *be;
+
+ /* For collect flow elements and sorting */
+ struct match_elem_s match[MAX_MATCH_FIELDS];
+ int num_ftype_elem;
+
+ /* Flow information */
+ /* HW input port ID needed for compare. The in-port must be identical across flow types */
+ uint32_t port_id;
+};
+
enum flow_port_type_e {
PORT_NONE, /* not defined or drop */
PORT_INTERNAL, /* no queues attached */
@@ -99,6 +123,25 @@ struct nic_flow_def {
uint32_t jump_to_group;
int full_offload;
+
+ /*
+ * Modify field
+ */
+ struct {
+ uint32_t select;
+ union {
+ uint8_t value8[16];
+ uint16_t value16[8];
+ uint32_t value32[4];
+ };
+ } modify_field[MAX_CPY_WRITERS_SUPPORTED];
+
+ uint32_t modify_field_count;
+
+ /*
+ * Key Matcher flow definitions
+ */
+ struct km_flow_def_s km;
};
enum flow_handle_type {
@@ -159,6 +202,9 @@ struct flow_handle {
void km_free_ndev_resource_management(void **handle);
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start, int8_t offset);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 34154c65f8..99b207a01c 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -133,6 +133,39 @@ enum km_flm_if_select_e {
unsigned int alloced_size; \
int debug
+enum {
+ PROT_OTHER = 0,
+ PROT_L2_ETH2 = 1,
+};
+
+enum {
+ PROT_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_L4_ICMP = 4
+};
+
+enum {
+ PROT_TUN_L3_OTHER = 0,
+ PROT_TUN_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_ICMP = 4
+};
+
+
+enum {
+ CPY_SELECT_DSCP_IPV4 = 0,
+ CPY_SELECT_DSCP_IPV6 = 1,
+ CPY_SELECT_RQI_QFI = 2,
+ CPY_SELECT_IPV4 = 3,
+ CPY_SELECT_PORT = 4,
+ CPY_SELECT_TEID = 5,
+};
+
struct common_func_s {
COMMON_FUNC_INFO_S;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index e04cd5e857..237e9f7b4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -3,10 +3,38 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <assert.h>
#include <stdlib.h>
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
+#include "nt_util.h"
+
+#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+
+static const struct cam_match_masks_s {
+ uint32_t word_len;
+ uint32_t key_mask[4];
+} cam_masks[] = {
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff } }, /* IP6_SRC, IP6_DST */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffff0000 } }, /* DMAC,SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffff0000, 0x00000000, 0xffff0000 } }, /* DMAC,ethtype */
+ { 4, { 0x00000000, 0x0000ffff, 0xffffffff, 0xffff0000 } }, /* SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0x00000000 } }, /* ETH_128 */
+ { 2, { 0xffffffff, 0xffffffff, 0x00000000, 0x00000000 } }, /* IP4_COMBINED */
+ /*
+ * ETH_TYPE, IP4_TTL_PROTO, IP4_SRC, IP4_DST, IP6_FLOW_TC,
+ * IP6_NEXT_HDR_HOP, TP_PORT_COMBINED, SIDEBAND_VNI
+ */
+ { 1, { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IP4_IHL_TOS, TP_PORT_SRC32_OR_ICMP, TCP_CTRL */
+ { 1, { 0xffff0000, 0x00000000, 0x00000000, 0x00000000 } },
+ { 1, { 0x0000ffff, 0x00000000, 0x00000000, 0x00000000 } }, /* TP_PORT_DST32 */
+ /* IPv4 TOS mask bits used often by OVS */
+ { 1, { 0x00030000, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IPv6 TOS mask bits used often by OVS */
+ { 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
+};
void km_free_ndev_resource_management(void **handle)
{
@@ -17,3 +45,56 @@ void km_free_ndev_resource_management(void **handle)
*handle = NULL;
}
+
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start_id, int8_t offset)
+{
+ /* valid word_len 1,2,4 */
+ if (word_len == 3) {
+ word_len = 4;
+ e_word[3] = 0;
+ e_mask[3] = 0;
+ }
+
+ if (word_len < 1 || word_len > 4) {
+ assert(0);
+ return -1;
+ }
+
+ for (unsigned int i = 0; i < word_len; i++) {
+ km->match[km->num_ftype_elem].e_word[i] = e_word[i];
+ km->match[km->num_ftype_elem].e_mask[i] = e_mask[i];
+ }
+
+ km->match[km->num_ftype_elem].word_len = word_len;
+ km->match[km->num_ftype_elem].rel_offs = offset;
+ km->match[km->num_ftype_elem].extr_start_offs_id = start_id;
+
+ /*
+ * Determine here if this flow may better be put into TCAM
+ * Otherwise it will go into CAM
+ * This is dependent on a cam_masks list defined above
+ */
+ km->match[km->num_ftype_elem].masked_for_tcam = 1;
+
+ for (unsigned int msk = 0; msk < NUM_CAM_MASKS; msk++) {
+ if (word_len == cam_masks[msk].word_len) {
+ int match = 1;
+
+ for (unsigned int wd = 0; wd < word_len; wd++) {
+ if (e_mask[wd] != cam_masks[msk].key_mask[wd]) {
+ match = 0;
+ break;
+ }
+ }
+
+ if (match) {
+ /* Can go into CAM */
+ km->match[km->num_ftype_elem].masked_for_tcam = 0;
+ }
+ }
+ }
+
+ km->num_ftype_elem++;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7f9869a511..0f136ee164 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -416,10 +416,67 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return 0;
}
-static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
- uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
- uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
- uint32_t priority __rte_unused)
+static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def *fd,
+ const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
+ uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
+{
+ switch (fd->l4_prot) {
+ case PROT_L4_ICMP:
+ fh->flm_prot = fd->ip_prot;
+ break;
+
+ default:
+ switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_ICMP:
+ fh->flm_prot = fd->tunnel_ip_prot;
+ break;
+
+ default:
+ fh->flm_prot = 0;
+ break;
+ }
+
+ break;
+ }
+
+ memcpy(fh->flm_data, packet_data, sizeof(uint32_t) * 10);
+
+ fh->flm_kid = flm_key_id;
+ fh->flm_rpl_ext_ptr = rpl_ext_ptr;
+ fh->flm_prio = (uint8_t)priority;
+ fh->flm_ft = (uint8_t)flm_ft;
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ switch (fd->modify_field[i].select) {
+ case CPY_SELECT_DSCP_IPV4:
+ case CPY_SELECT_RQI_QFI:
+ fh->flm_rqi = (fd->modify_field[i].value8[0] >> 6) & 0x1;
+ fh->flm_qfi = fd->modify_field[i].value8[0] & 0x3f;
+ break;
+
+ case CPY_SELECT_IPV4:
+ fh->flm_nat_ipv4 = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ case CPY_SELECT_PORT:
+ fh->flm_nat_port = ntohs(fd->modify_field[i].value16[0]);
+ break;
+
+ case CPY_SELECT_TEID:
+ fh->flm_teid = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Unknown modify field: %d",
+ fd->modify_field[i].select);
+ break;
+ }
+ }
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
+ uint32_t flm_key_id, uint32_t flm_ft, uint16_t rpl_ext_ptr,
+ uint32_t flm_scrub, uint32_t priority)
{
struct nic_flow_def *fd;
struct flow_handle fh_copy;
@@ -443,6 +500,9 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
for (int i = 0; i < RES_COUNT; ++i)
fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+ copy_fd_to_fh_flm(fh, fd, packet_data, flm_key_id, flm_ft, rpl_ext_ptr, flm_scrub,
+ priority);
+
free(fd);
return 0;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 10/73] net/ntnic: add action queue
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (8 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 09/73] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 11/73] net/ntnic: add action mark Serhii Iliushyk
` (63 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_QUEUE
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 37 +++++++++++++++++++
2 files changed, 38 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 1c653fd5a0..5b3c26da05 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,3 +18,4 @@ any = Y
[rte_flow actions]
port_id = Y
+queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0f136ee164..a3fe2fe902 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -23,6 +23,15 @@
static void *flm_lrn_queue_arr;
+static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
+{
+ for (int i = 0; i < dev->num_queues; ++i)
+ if (dev->rx_queue[i].id == id)
+ return dev->rx_queue[i].hw_id;
+
+ return -1;
+}
+
struct flm_flow_key_def_s {
union {
struct {
@@ -349,6 +358,34 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_queue queue_tmp;
+ const struct rte_flow_action_queue *queue =
+ memcpy_mask_if(&queue_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_queue));
+
+ int hw_id = rx_queue_idx_to_hw_id(dev, queue->index);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE port %u, queue index: %u, hw id %u",
+ dev, dev->port, queue->index, hw_id);
+
+ fd->full_offload = 0;
+ *num_queues += 1;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 11/73] net/ntnic: add action mark
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (9 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 10/73] net/ntnic: add action queue Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 12/73] net/ntnic: add action jump Serhii Iliushyk
` (62 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_MARK
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 5b3c26da05..42ac9f9c31 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,5 +17,6 @@ x86-64 = Y
any = Y
[rte_flow actions]
+mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a3fe2fe902..96b7192edc 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -386,6 +386,22 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_mark mark_tmp;
+ const struct rte_flow_action_mark *mark =
+ memcpy_mask_if(&mark_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_mark));
+
+ fd->mark = mark->id;
+ NT_LOG(DBG, FILTER, "Mark: %i", mark->id);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 12/73] net/ntnic: add action jump
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (10 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 11/73] net/ntnic: add action mark Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 13/73] net/ntnic: add action drop Serhii Iliushyk
` (61 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_JUMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 17 +++++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 42ac9f9c31..f3334fc86d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+jump = Y
mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 96b7192edc..603039374a 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -402,6 +402,23 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_jump jump_tmp;
+ const struct rte_flow_action_jump *jump =
+ memcpy_mask_if(&jump_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_jump));
+
+ fd->jump_to_group = jump->group;
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP: group %u",
+ dev, jump->group);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 13/73] net/ntnic: add action drop
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (11 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 12/73] net/ntnic: add action jump Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 14/73] net/ntnic: add item eth Serhii Iliushyk
` (60 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for RTE_FLOW_ACTION_TYPE_DROP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index f3334fc86d..372653695d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+drop = Y
jump = Y
mark = Y
port_id = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 603039374a..64168fcc7d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -419,6 +419,18 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_DROP", dev);
+
+ if (action[aidx].conf) {
+ fd->dst_id[fd->dst_num_avail].owning_port_id = 0;
+ fd->dst_id[fd->dst_num_avail].id = 0;
+ fd->dst_id[fd->dst_num_avail].type = PORT_NONE;
+ fd->dst_num_avail++;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 14/73] net/ntnic: add item eth
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (12 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 13/73] net/ntnic: add action drop Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 15/73] net/ntnic: add item IPv4 Serhii Iliushyk
` (59 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for RTE_FLOW_ITEM_TYPE_ETH
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 23 +++
.../profile_inline/flow_api_profile_inline.c | 180 ++++++++++++++++++
3 files changed, 204 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 372653695d..36b8212bae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -15,6 +15,7 @@ x86-64 = Y
[rte_flow items]
any = Y
+eth = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 99b207a01c..0c22129fb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -120,6 +120,29 @@ enum {
} \
} while (0)
+static inline int is_non_zero(const void *addr, size_t n)
+{
+ size_t i = 0;
+ const uint8_t *p = (const uint8_t *)addr;
+
+ for (i = 0; i < n; i++)
+ if (p[i] != 0)
+ return 1;
+
+ return 0;
+}
+
+enum frame_offs_e {
+ DYN_L2 = 1,
+ DYN_L3 = 4,
+ DYN_L4 = 7,
+ DYN_L4_PAYLOAD = 8,
+ DYN_TUN_L3 = 13,
+ DYN_TUN_L4 = 16,
+};
+
+/* Sideband info bit indicator */
+
enum km_flm_if_select_e {
KM_FLM_IF_FIRST = 0,
KM_FLM_IF_SECOND = 1
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 64168fcc7d..93f666a054 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -55,6 +55,36 @@ struct flm_flow_key_def_s {
/*
* Flow Matcher functionality
*/
+static inline void set_key_def_qw(struct flm_flow_key_def_s *key_def, unsigned int qw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(qw < 2);
+
+ if (qw == 0) {
+ key_def->qw0_dyn = dyn & 0x7f;
+ key_def->qw0_ofs = ofs & 0xff;
+
+ } else {
+ key_def->qw4_dyn = dyn & 0x7f;
+ key_def->qw4_ofs = ofs & 0xff;
+ }
+}
+
+static inline void set_key_def_sw(struct flm_flow_key_def_s *key_def, unsigned int sw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(sw < 2);
+
+ if (sw == 0) {
+ key_def->sw8_dyn = dyn & 0x7f;
+ key_def->sw8_ofs = ofs & 0xff;
+
+ } else {
+ key_def->sw9_dyn = dyn & 0x7f;
+ key_def->sw9_ofs = ofs & 0xff;
+ }
+}
+
static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
{
struct flow_eth_dev *dev = ndev->eth_base;
@@ -457,6 +487,11 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
uint32_t *packet_mask,
struct flm_flow_key_def_s *key_def)
{
+ uint32_t any_count = 0;
+
+ unsigned int qw_counter = 0;
+ unsigned int sw_counter = 0;
+
*in_port_id = UINT32_MAX;
memset(packet_data, 0x0, sizeof(uint32_t) * 10);
@@ -472,6 +507,28 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH: {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (eth_spec != NULL && eth_mask != NULL) {
+ if (is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6)) {
+ qw_reserved_mac += 1;
+ }
+ }
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
if (qw_free < 0) {
@@ -484,6 +541,129 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
switch (elem[eidx].type) {
case RTE_FLOW_ITEM_TYPE_ANY:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ any_count += 1;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ETH",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (any_count > 0) {
+ NT_LOG(ERR, FILTER,
+ "Tunneled L2 ethernet not supported");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (eth_spec == NULL || eth_mask == NULL) {
+ fd->l2_prot = PROT_L2_ETH2;
+ break;
+ }
+
+ int non_zero = is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6);
+
+ if (non_zero ||
+ (eth_mask->ether_type != 0 && sw_counter >= 2)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ((eth_spec->dst_addr.addr_bytes[0] &
+ eth_mask->dst_addr.addr_bytes[0]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[1] &
+ eth_mask->dst_addr.addr_bytes[1]) << 16) +
+ ((eth_spec->dst_addr.addr_bytes[2] &
+ eth_mask->dst_addr.addr_bytes[2]) << 8) +
+ (eth_spec->dst_addr.addr_bytes[3] &
+ eth_mask->dst_addr.addr_bytes[3]);
+
+ qw_data[1] = ((eth_spec->dst_addr.addr_bytes[4] &
+ eth_mask->dst_addr.addr_bytes[4]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[5] &
+ eth_mask->dst_addr.addr_bytes[5]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[0] &
+ eth_mask->src_addr.addr_bytes[0]) << 8) +
+ (eth_spec->src_addr.addr_bytes[1] &
+ eth_mask->src_addr.addr_bytes[1]);
+
+ qw_data[2] = ((eth_spec->src_addr.addr_bytes[2] &
+ eth_mask->src_addr.addr_bytes[2]) << 24) +
+ ((eth_spec->src_addr.addr_bytes[3] &
+ eth_mask->src_addr.addr_bytes[3]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[4] &
+ eth_mask->src_addr.addr_bytes[4]) << 8) +
+ (eth_spec->src_addr.addr_bytes[5] &
+ eth_mask->src_addr.addr_bytes[5]);
+
+ qw_data[3] = ntohs(eth_spec->ether_type &
+ eth_mask->ether_type) << 16;
+
+ qw_mask[0] = (eth_mask->dst_addr.addr_bytes[0] << 24) +
+ (eth_mask->dst_addr.addr_bytes[1] << 16) +
+ (eth_mask->dst_addr.addr_bytes[2] << 8) +
+ eth_mask->dst_addr.addr_bytes[3];
+
+ qw_mask[1] = (eth_mask->dst_addr.addr_bytes[4] << 24) +
+ (eth_mask->dst_addr.addr_bytes[5] << 16) +
+ (eth_mask->src_addr.addr_bytes[0] << 8) +
+ eth_mask->src_addr.addr_bytes[1];
+
+ qw_mask[2] = (eth_mask->src_addr.addr_bytes[2] << 24) +
+ (eth_mask->src_addr.addr_bytes[3] << 16) +
+ (eth_mask->src_addr.addr_bytes[4] << 8) +
+ eth_mask->src_addr.addr_bytes[5];
+
+ qw_mask[3] = ntohs(eth_mask->ether_type) << 16;
+
+ km_add_match_elem(&fd->km,
+ &qw_data[(size_t)(qw_counter * 4)],
+ &qw_mask[(size_t)(qw_counter * 4)], 4, DYN_L2, 0);
+ set_key_def_qw(key_def, qw_counter, DYN_L2, 0);
+ qw_counter += 1;
+
+ if (!non_zero)
+ qw_free -= 1;
+
+ } else if (eth_mask->ether_type != 0) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(eth_mask->ether_type) << 16;
+ sw_data[0] = ntohs(eth_spec->ether_type) << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, DYN_L2, 12);
+ set_key_def_sw(key_def, sw_counter, DYN_L2, 12);
+ sw_counter += 1;
+ }
+
+ fd->l2_prot = PROT_L2_ETH2;
+ }
+
+ break;
+
dev->ndev->adapter_no, dev->port);
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
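The QW packing in the ETH handler above is hard to follow in diff form. The helper below reproduces the same byte layout in plain stdlib C for illustration: the masked destination MAC fills the first six key bytes, the source MAC the next six, and the ether_type (host order, after the driver's ntohs()) the top half of the last word. The function name is ours, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the 4 x 32-bit QW key layout built by the
 * RTE_FLOW_ITEM_TYPE_ETH handler: dst MAC bytes 0..3, then dst bytes
 * 4..5 followed by src bytes 0..1, then src bytes 2..5, and finally
 * the host-order ether_type shifted into the top 16 bits. */
static void pack_eth_qw(const uint8_t dst[6], const uint8_t src[6],
			uint16_t ether_type, uint32_t qw[4])
{
	qw[0] = (uint32_t)dst[0] << 24 | (uint32_t)dst[1] << 16 |
		(uint32_t)dst[2] << 8 | dst[3];
	qw[1] = (uint32_t)dst[4] << 24 | (uint32_t)dst[5] << 16 |
		(uint32_t)src[0] << 8 | src[1];
	qw[2] = (uint32_t)src[2] << 24 | (uint32_t)src[3] << 16 |
		(uint32_t)src[4] << 8 | src[5];
	qw[3] = (uint32_t)ether_type << 16;
}
```

In the driver, spec bytes are additionally ANDed with the mask bytes before packing, and a parallel mask array is built the same way.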
* [PATCH v2 15/73] net/ntnic: add item IPv4
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (13 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 14/73] net/ntnic: add item eth Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 16/73] net/ntnic: add item ICMP Serhii Iliushyk
` (58 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for RTE_FLOW_ITEM_TYPE_IPV4
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 162 ++++++++++++++++++
2 files changed, 163 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 36b8212bae..bae25d2e2d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+ipv4 = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 93f666a054..d5d853351e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -664,7 +664,169 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv4 *ipv4_spec =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].mask;
+
+ if (ipv4_spec == NULL || ipv4_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.version_ihl != 0 ||
+ ipv4_mask->hdr.type_of_service != 0 ||
+ ipv4_mask->hdr.total_length != 0 ||
+ ipv4_mask->hdr.packet_id != 0 ||
+ (ipv4_mask->hdr.fragment_offset != 0 &&
+ (ipv4_spec->hdr.fragment_offset != 0xffff ||
+ ipv4_mask->hdr.fragment_offset != 0xffff)) ||
+ ipv4_mask->hdr.time_to_live != 0 ||
+ ipv4_mask->hdr.hdr_checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv4 field not supported by running SW version.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (ipv4_spec->hdr.fragment_offset == 0xffff &&
+ ipv4_mask->hdr.fragment_offset == 0xffff) {
+ fd->fragmentation = 0xfe;
+ }
+
+ int match_cnt = (ipv4_mask->hdr.src_addr != 0) +
+ (ipv4_mask->hdr.dst_addr != 0) +
+ (ipv4_mask->hdr.next_proto_id != 0);
+
+ if (match_cnt <= 0) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (qw_free > 0 &&
+ (match_cnt >= 2 ||
+ (match_cnt == 1 && sw_counter >= 2))) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED,
+ error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_mask[0] = 0;
+ qw_data[0] = 0;
+
+ qw_mask[1] = ipv4_mask->hdr.next_proto_id << 16;
+ qw_data[1] = ipv4_spec->hdr.next_proto_id
+ << 16 & qw_mask[1];
+
+ qw_mask[2] = ntohl(ipv4_mask->hdr.src_addr);
+ qw_mask[3] = ntohl(ipv4_mask->hdr.dst_addr);
+
+ qw_data[2] = ntohl(ipv4_spec->hdr.src_addr) & qw_mask[2];
+ qw_data[3] = ntohl(ipv4_spec->hdr.dst_addr) & qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.src_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.src_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.src_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 12);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 12);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.dst_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.dst_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.dst_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 16);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 16);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.next_proto_id) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv4_mask->hdr.next_proto_id << 16;
+ sw_data[0] = ipv4_spec->hdr.next_proto_id
+ << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ sw_counter += 1;
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
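When two or more IPv4 fields are masked, the handler above spends a full QW entry; its word layout is easier to see outside the diff. The sketch below takes host-order values (the driver applies ntohl() to the big-endian header fields first) and uses names of our own choosing:

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the IPv4 QW key layout: word 0 is unused, word 1
 * carries next_proto_id in bits 16..23, and words 2 and 3 carry the
 * masked source and destination addresses (all host order). The data
 * words are pre-masked, as in the driver. */
static void pack_ipv4_qw(uint32_t src_ip, uint32_t src_mask,
			 uint32_t dst_ip, uint32_t dst_mask,
			 uint8_t proto, uint8_t proto_mask,
			 uint32_t qw_data[4], uint32_t qw_mask[4])
{
	qw_mask[0] = 0;
	qw_data[0] = 0;

	qw_mask[1] = (uint32_t)proto_mask << 16;
	qw_data[1] = ((uint32_t)proto << 16) & qw_mask[1];

	qw_mask[2] = src_mask;
	qw_data[2] = src_ip & src_mask;

	qw_mask[3] = dst_mask;
	qw_data[3] = dst_ip & dst_mask;
}
```

With only one field masked the handler instead falls back to the cheaper single-word SW entries, as the three `if (ipv4_mask->hdr....)` branches show.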
* [PATCH v2 16/73] net/ntnic: add item ICMP
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (14 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 15/73] net/ntnic: add item IPv4 Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 17/73] net/ntnic: add item port ID Serhii Iliushyk
` (57 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for RTE_FLOW_ITEM_TYPE_ICMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 101 ++++++++++++++++++
2 files changed, 102 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index bae25d2e2d..d403ea01f3 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+icmp = Y
ipv4 = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index d5d853351e..6bf0ff8821 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -827,6 +827,107 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp *icmp_spec =
+ (const struct rte_flow_item_icmp *)elem[eidx].spec;
+ const struct rte_flow_item_icmp *icmp_mask =
+ (const struct rte_flow_item_icmp *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->hdr.icmp_cksum != 0 ||
+ icmp_mask->hdr.icmp_ident != 0 ||
+ icmp_mask->hdr.icmp_seq_nb != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->hdr.icmp_type || icmp_mask->hdr.icmp_code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ sw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter,
+ any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
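The ICMP handler above packs icmp_type and icmp_code into the top two bytes of a single 32-bit key word. The helper below restates that layout in self-contained C; the function name is ours, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the SW key word built by the ICMP handler:
 * icmp_type occupies bits 24..31, icmp_code bits 16..23, and the
 * data word is pre-masked before being handed to km_add_match_elem(). */
static uint32_t pack_icmp_sw(uint8_t type, uint8_t type_mask,
			     uint8_t code, uint8_t code_mask,
			     uint32_t *mask_out)
{
	uint32_t mask = (uint32_t)type_mask << 24 | (uint32_t)code_mask << 16;
	uint32_t data = ((uint32_t)type << 24 | (uint32_t)code << 16) & mask;

	*mask_out = mask;
	return data;
}
```

The QW fallback branch in the patch stores the same word in slot 0 and zeroes the remaining three words.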
* [PATCH v2 17/73] net/ntnic: add item port ID
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (15 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 16/73] net/ntnic: add item ICMP Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 18/73] net/ntnic: add item void Serhii Iliushyk
` (56 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for RTE_FLOW_ITEM_TYPE_PORT_ID
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../flow_api/profile_inline/flow_api_profile_inline.c | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index d403ea01f3..cdf119c4ae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,6 +18,7 @@ any = Y
eth = Y
icmp = Y
ipv4 = Y
+port_id = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6bf0ff8821..efefd52979 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -928,6 +928,17 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
+ dev->ndev->adapter_no, dev->port);
+
+ if (elem[eidx].spec) {
+ *in_port_id =
+ ((const struct rte_flow_item_port_id *)elem[eidx].spec)->id;
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 18/73] net/ntnic: add item void
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (16 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 17/73] net/ntnic: add item port ID Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 19/73] net/ntnic: add item UDP Serhii Iliushyk
` (55 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for RTE_FLOW_ITEM_TYPE_VOID
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../nthw/flow_api/profile_inline/flow_api_profile_inline.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index efefd52979..e47014615e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -939,6 +939,10 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_VOID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VOID",
+ dev->ndev->adapter_no, dev->port);
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 19/73] net/ntnic: add item UDP
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (17 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 18/73] net/ntnic: add item void Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 20/73] net/ntnic: add action TCP Serhii Iliushyk
` (54 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ITEM_TYPE_UDP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 103 ++++++++++++++++++
3 files changed, 106 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index cdf119c4ae..61a3d87909 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+udp = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 0c22129fb4..a95fb69870 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index e47014615e..3d4bb6e1eb 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -828,6 +828,101 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_udp *udp_spec =
+ (const struct rte_flow_item_udp *)elem[eidx].spec;
+ const struct rte_flow_item_udp *udp_mask =
+ (const struct rte_flow_item_udp *)elem[eidx].mask;
+
+ if (udp_spec == NULL || udp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (udp_mask->hdr.dgram_len != 0 ||
+ udp_mask->hdr.dgram_cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested UDP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (udp_mask->hdr.src_port || udp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(udp_mask->hdr.src_port) << 16) |
+ ntohs(udp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(udp_mask->hdr.src_port)
+ << 16) | ntohs(udp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -961,12 +1056,20 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
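The UDP handler above (and the TCP handler in the next patch) packs the two L4 ports into a single 32-bit key word: source port in the high half, destination port in the low half. The stdlib-only sketch below restates that layout; ports are host order here, since the driver applies ntohs() to the big-endian header fields first, and the function name is ours:

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the L4-port SW key word shared by the UDP and TCP
 * item handlers: (src_port << 16) | dst_port, with the data word
 * pre-masked before being added to the key. */
static uint32_t pack_l4_ports_sw(uint16_t src_port, uint16_t src_mask,
				 uint16_t dst_port, uint16_t dst_mask,
				 uint32_t *mask_out)
{
	uint32_t mask = (uint32_t)src_mask << 16 | dst_mask;
	uint32_t data = ((uint32_t)src_port << 16 | dst_port) & mask;

	*mask_out = mask;
	return data;
}
```

As with ICMP, the QW fallback branch places this word in slot 0 and zeroes the other three, which is why it only triggers when the cheaper SW slots are exhausted.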
* [PATCH v2 20/73] net/ntnic: add action TCP
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (18 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 19/73] net/ntnic: add item UDP Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 21/73] net/ntnic: add action VLAN Serhii Iliushyk
` (53 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ITEM_TYPE_TCP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 108 ++++++++++++++++++
3 files changed, 111 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 61a3d87909..e3c3982895 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+tcp = Y
udp = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a95fb69870..a1aa74caf5 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -177,6 +178,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 3d4bb6e1eb..f24178a164 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1024,6 +1024,106 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_tcp *tcp_spec =
+ (const struct rte_flow_item_tcp *)elem[eidx].spec;
+ const struct rte_flow_item_tcp *tcp_mask =
+ (const struct rte_flow_item_tcp *)elem[eidx].mask;
+
+ if (tcp_spec == NULL || tcp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (tcp_mask->hdr.sent_seq != 0 ||
+ tcp_mask->hdr.recv_ack != 0 ||
+ tcp_mask->hdr.data_off != 0 ||
+ tcp_mask->hdr.tcp_flags != 0 ||
+ tcp_mask->hdr.rx_win != 0 ||
+ tcp_mask->hdr.cksum != 0 ||
+ tcp_mask->hdr.tcp_urp != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested TCP field not supported by the running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (tcp_mask->hdr.src_port || tcp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ sw_data[0] =
+ ((ntohs(tcp_spec->hdr.src_port) << 16) |
+ ntohs(tcp_spec->hdr.dst_port)) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(tcp_spec->hdr.src_port)
+ << 16) | ntohs(tcp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1056,6 +1156,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_L4_UDP:
fh->flm_prot = 17;
break;
@@ -1066,6 +1170,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_TUN_L4_UDP:
fh->flm_prot = 17;
break;
--
2.45.0
* [PATCH v2 21/73] net/ntnic: add action VLAN
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (19 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 20/73] net/ntnic: add action TCP Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 22/73] net/ntnic: add item SCTP Serhii Iliushyk
` (52 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for matching on the RTE_FLOW_ITEM_TYPE_VLAN flow item.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 94 +++++++++++++++++++
3 files changed, 96 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e3c3982895..8b4821d6d0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -21,6 +21,7 @@ ipv4 = Y
port_id = Y
tcp = Y
udp = Y
+vlan = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a1aa74caf5..82ac3d0ff3 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -134,6 +134,7 @@ static inline int is_non_zero(const void *addr, size_t n)
enum frame_offs_e {
DYN_L2 = 1,
+ DYN_FIRST_VLAN = 2,
DYN_L3 = 4,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f24178a164..7c1b632dc0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -504,6 +504,20 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return -1;
}
+ if (implicit_vlan_vid > 0) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = 0x0fff;
+ sw_data[0] = implicit_vlan_vid & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1, DYN_FIRST_VLAN, 0);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN, 0);
+ sw_counter += 1;
+
+ fd->vlans += 1;
+ }
+
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
@@ -664,6 +678,86 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VLAN",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_vlan_hdr *vlan_spec =
+ (const struct rte_vlan_hdr *)elem[eidx].spec;
+ const struct rte_vlan_hdr *vlan_mask =
+ (const struct rte_vlan_hdr *)elem[eidx].mask;
+
+ if (vlan_spec == NULL || vlan_mask == NULL) {
+ fd->vlans += 1;
+ break;
+ }
+
+ if (!vlan_mask->vlan_tci && !vlan_mask->eth_proto)
+ break;
+
+ if (implicit_vlan_vid > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple VLANs not supported for implicit VLAN patterns.");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM,
+ error);
+ return -1;
+ }
+
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ sw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_qw(key_def, qw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ fd->vlans += 1;
+ }
+
+ break;
case RTE_FLOW_ITEM_TYPE_IPV4:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
--
2.45.0
* [PATCH v2 22/73] net/ntnic: add item SCTP
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (20 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 21/73] net/ntnic: add action VLAN Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 23/73] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
` (51 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for matching on the RTE_FLOW_ITEM_TYPE_SCTP flow item.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 102 ++++++++++++++++++
3 files changed, 105 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b4821d6d0..6691b6dce2 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+sctp = Y
tcp = Y
udp = Y
vlan = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 82ac3d0ff3..f1c57fa9fc 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -169,6 +169,7 @@ enum {
enum {
PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
+ PROT_L4_SCTP = 3,
PROT_L4_ICMP = 4
};
@@ -181,6 +182,7 @@ enum {
PROT_TUN_L4_OTHER = 0,
PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
+ PROT_TUN_L4_SCTP = 3,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7c1b632dc0..9460325cf6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1017,6 +1017,100 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_SCTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_sctp *sctp_spec =
+ (const struct rte_flow_item_sctp *)elem[eidx].spec;
+ const struct rte_flow_item_sctp *sctp_mask =
+ (const struct rte_flow_item_sctp *)elem[eidx].mask;
+
+ if (sctp_spec == NULL || sctp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (sctp_mask->hdr.tag != 0 || sctp_mask->hdr.cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested SCTP field not supported by the running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (sctp_mask->hdr.src_port || sctp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -1258,6 +1352,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
@@ -1272,6 +1370,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_TUN_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
* [PATCH v2 23/73] net/ntnic: add items IPv6 and ICMPv6
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (21 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 22/73] net/ntnic: add item SCTP Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 24/73] net/ntnic: add action modify field Serhii Iliushyk
` (50 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for matching on the following flow items:
* RTE_FLOW_ITEM_TYPE_IPV6
* RTE_FLOW_ITEM_TYPE_ICMP6
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 2 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 27 ++
.../profile_inline/flow_api_profile_inline.c | 273 ++++++++++++++++++
4 files changed, 304 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 6691b6dce2..320d3c7e0b 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,7 +17,9 @@ x86-64 = Y
any = Y
eth = Y
icmp = Y
+icmp6 = Y
ipv4 = Y
+ipv6 = Y
port_id = Y
sctp = Y
tcp = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index f1c57fa9fc..4f381bc0ef 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -164,6 +164,7 @@ enum {
enum {
PROT_L3_IPV4 = 1,
+ PROT_L3_IPV6 = 2
};
enum {
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
+ PROT_TUN_L3_IPV6 = 2
};
enum {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 6800a8d834..2aee2ee973 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -47,6 +47,33 @@ static const struct {
} err_msg[] = {
/* 00 */ { "Operation successfully completed" },
/* 01 */ { "Operation failed" },
+ /* 02 */ { "Memory allocation failed" },
+ /* 03 */ { "Too many output destinations" },
+ /* 04 */ { "Too many output queues for RSS" },
+ /* 05 */ { "The VLAN TPID specified is not supported" },
+ /* 06 */ { "The VxLan Push header specified is not accepted" },
+ /* 07 */ { "While interpreting VxLan Pop action, could not find a destination port" },
+ /* 08 */ { "Failed in creating a HW-internal VTEP port" },
+ /* 09 */ { "Too many VLAN tag matches" },
+ /* 10 */ { "IPv6 invalid header specified" },
+ /* 11 */ { "Too many tunnel ports. HW limit reached" },
+ /* 12 */ { "Unknown or unsupported flow match element received" },
+ /* 13 */ { "Match failed because of HW limitations" },
+ /* 14 */ { "Match failed because of HW resource limitations" },
+ /* 15 */ { "Match failed because of too complex element definitions" },
	/* 16 */ { "Action failed due to too many output destinations" },
+ /* 17 */ { "Action Output failed, due to HW resource exhaustion" },
+ /* 18 */ { "Push Tunnel Header action cannot output to multiple destination queues" },
+ /* 19 */ { "Inline action HW resource exhaustion" },
+ /* 20 */ { "Action retransmit/recirculate HW resource exhaustion" },
+ /* 21 */ { "Flow counter HW resource exhaustion" },
+ /* 22 */ { "Internal HW resource exhaustion while handling actions" },
+ /* 23 */ { "Internal HW QSL compare failed" },
+ /* 24 */ { "Internal CAT CFN reuse failed" },
+ /* 25 */ { "Match variations too complex" },
+ /* 26 */ { "Match failed because of CAM/TCAM full" },
+ /* 27 */ { "Internal creation of a tunnel end point port failed" },
+ /* 28 */ { "Unknown or unsupported flow action received" },
/* 29 */ { "Removing flow failed" },
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9460325cf6..0b0b9f2033 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -538,6 +538,22 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6: {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec != NULL && ipv6_mask != NULL) {
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16))
+ qw_reserved_ipv6 += 1;
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16))
+ qw_reserved_ipv6 += 1;
+ }
+ }
+ break;
+
default:
break;
}
@@ -922,6 +938,164 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec == NULL || ipv6_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ break;
+ }
+
+ fd->l3_prot = PROT_L3_IPV6;
+ if (ipv6_mask->hdr.vtc_flow != 0 ||
+ ipv6_mask->hdr.payload_len != 0 ||
+ ipv6_mask->hdr.hop_limits != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv6 field not supported by the running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.src_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.src_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ qw_counter += 1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.dst_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.dst_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 24);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 24);
+ qw_counter += 1;
+ }
+
+ if (ipv6_mask->hdr.proto != 0) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv6_mask->hdr.proto << 8;
+ sw_data[0] = ipv6_spec->hdr.proto << 8 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = 0;
+ qw_data[1] = ipv6_spec->hdr.proto << 8;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = 0;
+ qw_mask[1] = ipv6_mask->hdr.proto << 8;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_UDP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
dev->ndev->adapter_no, dev->port);
@@ -1212,6 +1386,105 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp6 *icmp_spec =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].spec;
+ const struct rte_flow_item_icmp6 *icmp_mask =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP6 field not supported by the running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->type || icmp_mask->code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ sw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_TCP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
dev->ndev->adapter_no, dev->port);
--
2.45.0
* [PATCH v2 24/73] net/ntnic: add action modify field
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (22 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 23/73] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 25/73] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
` (49 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 7 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 181 ++++++++++++++++++
4 files changed, 190 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 320d3c7e0b..4201c8e8b9 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -30,5 +30,6 @@ vlan = Y
drop = Y
jump = Y
mark = Y
+modify_field = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 13fad2760a..f6557d0d20 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,10 @@ struct nic_flow_def {
*/
struct {
uint32_t select;
+ uint32_t dyn;
+ uint32_t ofs;
+ uint32_t len;
+ uint32_t level;
union {
uint8_t value8[16];
uint16_t value16[8];
@@ -137,6 +141,9 @@ struct nic_flow_def {
} modify_field[MAX_CPY_WRITERS_SUPPORTED];
uint32_t modify_field_count;
+ uint8_t ttl_sub_enable;
+ uint8_t ttl_sub_ipv4;
+ uint8_t ttl_sub_outer;
/*
* Key Matcher flow definitions
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 4f381bc0ef..6a8a38636f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -140,6 +140,7 @@ enum frame_offs_e {
DYN_L4_PAYLOAD = 8,
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
+ DYN_TUN_L4_PAYLOAD = 17,
};
/* Sideband info bit indicator */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0b0b9f2033..2cda2e8b14 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -323,6 +323,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
{
unsigned int encap_decap_order = 0;
+ uint64_t modify_field_use_flags = 0x0;
+
*num_dest_port = 0;
*num_queues = 0;
@@ -461,6 +463,185 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
+ {
+ /* Note: This copy method will not work for FLOW_FIELD_POINTER */
+ struct rte_flow_action_modify_field modify_field_tmp;
+ const struct rte_flow_action_modify_field *modify_field =
+ memcpy_mask_if(&modify_field_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_modify_field));
+
+ uint64_t modify_field_use_flag = 0;
+
+ if (modify_field->src.field != RTE_FLOW_FIELD_VALUE) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only src type VALUE is supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.level > 2) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only dst levels 0, 1, and 2 are supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL ||
+ modify_field->dst.field == RTE_FLOW_FIELD_IPV6_HOPLIMIT) {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SUB) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SUB is supported for TTL/HOPLIMIT.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->ttl_sub_enable) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD TTL/HOPLIMIT resource already in use.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->ttl_sub_enable = 1;
+ fd->ttl_sub_ipv4 =
+ (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL)
+ ? 1
+ : 0;
+ fd->ttl_sub_outer = (modify_field->dst.level <= 1) ? 1 : 0;
+
+ } else {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SET) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SET is supported in general.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->modify_field_count >=
+ dev->ndev->be.tpe.nb_cpy_writers) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD exceeded maximum of %u MODIFY_FIELD actions.",
+ dev->ndev->be.tpe.nb_cpy_writers);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ int mod_outer = modify_field->dst.level <= 1;
+
+ switch (modify_field->dst.field) {
+ case RTE_FLOW_FIELD_IPV4_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 1;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV6_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV6;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ /*
+ * len=2 is needed because
+ * IPv6 DSCP overlaps 2 bytes.
+ */
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_PSC_QFI:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_RQI_QFI;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 14;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 12;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 16;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_SRC:
+ case RTE_FLOW_FIELD_UDP_PORT_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_DST:
+ case RTE_FLOW_FIELD_UDP_PORT_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 2;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_TEID:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_TEID;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 4;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type is not supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ modify_field_use_flag = 1
+ << fd->modify_field[fd->modify_field_count].select;
+
+ if (modify_field_use_flag & modify_field_use_flags) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type hardware resource already used.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ memcpy(fd->modify_field[fd->modify_field_count].value8,
+ modify_field->src.value, 16);
+
+ fd->modify_field[fd->modify_field_count].level =
+ modify_field->dst.level;
+
+ modify_field_use_flags |= modify_field_use_flag;
+ fd->modify_field_count += 1;
+ }
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
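The MODIFY_FIELD switch in this patch maps each destination field to a fixed byte offset and length within the selected header. For reference, that mapping can be sketched as a small standalone lookup; the enum and struct below are illustrative stand-ins for the driver's types, not its real API:

```c
#include <assert.h>

/* Illustrative stand-ins for the destination-field identifiers. */
enum mod_field {
	MOD_IPV4_DSCP,
	MOD_IPV6_DSCP,
	MOD_GTP_PSC_QFI,
	MOD_IPV4_SRC,
	MOD_IPV4_DST,
	MOD_PORT_SRC,
	MOD_PORT_DST,
	MOD_GTP_TEID,
};

struct cpy_loc {
	int ofs; /* byte offset within the selected header */
	int len; /* number of bytes written by the copy writer */
};

/* Mirrors the ofs/len pairs assigned in the MODIFY_FIELD switch above. */
static struct cpy_loc mod_field_loc(enum mod_field f)
{
	switch (f) {
	case MOD_IPV4_DSCP:   return (struct cpy_loc){ 1, 1 };
	case MOD_IPV6_DSCP:   return (struct cpy_loc){ 0, 2 }; /* DSCP overlaps 2 bytes in IPv6 */
	case MOD_GTP_PSC_QFI: return (struct cpy_loc){ 14, 1 };
	case MOD_IPV4_SRC:    return (struct cpy_loc){ 12, 4 };
	case MOD_IPV4_DST:    return (struct cpy_loc){ 16, 4 };
	case MOD_PORT_SRC:    return (struct cpy_loc){ 0, 2 };  /* TCP or UDP source port */
	case MOD_PORT_DST:    return (struct cpy_loc){ 2, 2 };  /* TCP or UDP destination port */
	case MOD_GTP_TEID:    return (struct cpy_loc){ 4, 4 };
	}
	return (struct cpy_loc){ -1, -1 };
}
```

The offsets match the standard header layouts (e.g. IPv4 source address at byte 12, GTP TEID at byte 4 of the GTP header), which is why a single copy-writer resource per field type is sufficient.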
* [PATCH v2 25/73] net/ntnic: add items gtp and actions raw encap/decap
@ 2024-10-22 16:54 ` Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the following items and actions:
* RTE_FLOW_ITEM_TYPE_GTP
* RTE_FLOW_ITEM_TYPE_GTP_PSC
* RTE_FLOW_ACTION_TYPE_RAW_ENCAP
* RTE_FLOW_ACTION_TYPE_RAW_DECAP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 4 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/flow_api_engine.h | 40 ++
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/include/stream_binary_flow_api.h | 22 ++
.../profile_inline/flow_api_profile_inline.c | 366 +++++++++++++++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 278 ++++++++++++-
7 files changed, 713 insertions(+), 5 deletions(-)
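The raw encap/decap data added here is parsed header by header into rte_flow items by interpret_raw_data() (see the ntnic_filter.c hunk below). The Ethernet/VLAN portion of that walk can be sketched as a self-contained routine; the struct-free offsets and EtherType constants are written out locally so the sketch stands alone and is not the driver's code:

```c
#include <assert.h>
#include <stdint.h>

#define ETH_HLEN        14
#define VLAN_HLEN       4
#define ETHERTYPE_VLAN  0x8100
#define ETHERTYPE_QINQ  0x88A8

/* Read a big-endian 16-bit field from the raw buffer. */
static uint16_t be16(const uint8_t *p) { return (uint16_t)(p[0] << 8 | p[1]); }

/* Walk the Ethernet header plus any stacked VLAN tags and return the byte
 * offset of the L3 header, or -1 if the buffer is truncated. This mirrors
 * the start of interpret_raw_data(). */
static int l3_offset(const uint8_t *data, int size)
{
	if (size < ETH_HLEN)
		return -1;

	int ofs = ETH_HLEN;
	uint16_t ether_type = be16(&data[12]); /* EtherType at bytes 12-13 */

	while (ether_type == ETHERTYPE_VLAN || ether_type == ETHERTYPE_QINQ) {
		if (size - ofs < VLAN_HLEN)
			return -1;
		/* Inner EtherType sits 2 bytes into each VLAN tag. */
		ether_type = be16(&data[ofs + 2]);
		ofs += VLAN_HLEN;
	}
	return ofs;
}
```

The real function additionally emits one `rte_flow_item` per header and continues through L3, L4 and GTP, but the truncation checks and the tag-stacking loop follow this shape.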
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4201c8e8b9..4cb9509742 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,8 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+gtp = Y
+gtp_psc = Y
icmp = Y
icmp6 = Y
ipv4 = Y
@@ -33,3 +35,5 @@ mark = Y
modify_field = Y
port_id = Y
queue = Y
+raw_decap = Y
+raw_encap = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 179542d2b2..70e6cad195 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,8 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct flow_action_raw_encap encap;
+ struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
};
@@ -52,6 +54,8 @@ enum nt_rte_flow_item_type {
};
extern rte_spinlock_t flow_lock;
+
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out);
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index f6557d0d20..b1d39b919b 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -56,6 +56,29 @@ enum res_type_e {
#define MAX_MATCH_FIELDS 16
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+ uint32_t user_port_id;
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+ uint16_t ip_csum_precalc;
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
+};
+
struct match_elem_s {
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
@@ -124,6 +147,23 @@ struct nic_flow_def {
int full_offload;
+ /*
+ * Action push tunnel
+ */
+ struct tunnel_header_s tun_hdr;
+
+ /*
+ * If DPDK RTE tunnel helper API used
+ * this holds the tunnel if used in flow
+ */
+ struct tunnel_s *tnl;
+
+ /*
+ * Header Stripper
+ */
+ int header_strip_end_dyn;
+ int header_strip_end_ofs;
+
/*
* Modify field
*/
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6a8a38636f..1b45ea4296 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -175,6 +175,10 @@ enum {
PROT_L4_ICMP = 4
};
+enum {
+ PROT_TUN_GTPV1U = 6,
+};
+
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index d878b848c2..8097518d61 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -18,6 +18,7 @@
#define FLOW_MAX_QUEUES 128
+#define RAW_ENCAP_DECAP_ELEMS_MAX 16
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
@@ -31,6 +32,27 @@ struct flow_queue_id_s {
int hw_id;
};
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ */
+struct flow_action_raw_encap {
+ uint8_t *data;
+ uint8_t *preserve;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ */
+struct flow_action_raw_decap {
+ uint8_t *data;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
struct flow_eth_dev; /* port device */
struct flow_handle;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 2cda2e8b14..9fc4908975 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -463,6 +463,202 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
+
+ if (action[aidx].conf) {
+ const struct flow_action_raw_encap *encap =
+ (const struct flow_action_raw_encap *)action[aidx].conf;
+ const struct flow_action_raw_encap *encap_mask = action_mask
+ ? (const struct flow_action_raw_encap *)action_mask[aidx]
+ .conf
+ : NULL;
+ const struct rte_flow_item *items = encap->items;
+
+ if (encap_decap_order != 1) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (encap->size == 0 || encap->size > 255 ||
+ encap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP data/size invalid.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 2;
+
+ fd->tun_hdr.len = (uint8_t)encap->size;
+
+ if (encap_mask) {
+ memcpy_mask_if(fd->tun_hdr.d.hdr8, encap->data,
+ encap_mask->data, fd->tun_hdr.len);
+
+ } else {
+ memcpy(fd->tun_hdr.d.hdr8, encap->data, fd->tun_hdr.len);
+ }
+
+ while (items->type != RTE_FLOW_ITEM_TYPE_END) {
+ switch (items->type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ fd->tun_hdr.l2_len = 14;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->tun_hdr.nb_vlans += 1;
+ fd->tun_hdr.l2_len += 4;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ fd->tun_hdr.ip_version = 4;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv4_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 3] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->tun_hdr.ip_version = 6;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv6_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_sctp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_tcp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_udp_hdr);
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_icmp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->tun_hdr.l4_len =
+ sizeof(struct rte_flow_item_icmp6);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 3] = 0xfd;
+ break;
+
+ default:
+ break;
+ }
+
+ items++;
+ }
+
+ if (fd->tun_hdr.nb_vlans > 3) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Encapsulation with %d vlans not supported.",
+ (int)fd->tun_hdr.nb_vlans);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ /* Convert encap data to 128-bit little endian */
+ for (size_t i = 0; i < (encap->size + 15) / 16; ++i) {
+ uint8_t *data = fd->tun_hdr.d.hdr8 + i * 16;
+
+ for (unsigned int j = 0; j < 8; ++j) {
+ uint8_t t = data[j];
+ data[j] = data[15 - j];
+ data[15 - j] = t;
+ }
+ }
+ }
+
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_DECAP", dev);
+
+ if (action[aidx].conf) {
+ /* Mask is N/A for RAW_DECAP */
+ const struct flow_action_raw_decap *decap =
+ (const struct flow_action_raw_decap *)action[aidx].conf;
+
+ if (encap_decap_order != 0) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (decap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_DECAP must decap something.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 1;
+
+ switch (decap->items[decap->item_count - 2].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->header_strip_end_dyn = DYN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->header_strip_end_dyn = DYN_L4;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->header_strip_end_dyn = DYN_L4_PAYLOAD;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ fd->header_strip_end_dyn = DYN_TUN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ default:
+ fd->header_strip_end_dyn = DYN_L2;
+ fd->header_strip_end_ofs = 0;
+ break;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
{
@@ -1766,6 +1962,174 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_hdr *gtp_spec =
+ (const struct rte_gtp_hdr *)elem[eidx].spec;
+ const struct rte_gtp_hdr *gtp_mask =
+ (const struct rte_gtp_hdr *)elem[eidx].mask;
+
+ if (gtp_spec == NULL || gtp_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_mask->gtp_hdr_info != 0 ||
+ gtp_mask->msg_type != 0 || gtp_mask->plen != 0) {
+ NT_LOG(ERR, FILTER,
+						"Requested GTP field is not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_mask->teid) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_mask->teid);
+ sw_data[0] =
+ ntohl(gtp_spec->teid) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_spec->teid);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_mask->teid);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP_PSC:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP_PSC",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_spec =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].spec;
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_mask =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].mask;
+
+ if (gtp_psc_spec == NULL || gtp_psc_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_psc_mask->type != 0 ||
+ gtp_psc_mask->ext_hdr_len != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP PSC field is not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_psc_mask->qfi) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ sw_data[0] = ntohl(gtp_psc_spec->qfi) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 14);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_psc_spec->qfi);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 14);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1929,7 +2293,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
- uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index b9d723c9dd..df391b6399 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -16,6 +16,211 @@
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out)
+{
+ int hdri = 0;
+ int pkti = 0;
+
+ /* Ethernet */
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_ether_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ETH;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ rte_be16_t ether_type = ((struct rte_ether_hdr *)&data[pkti])->ether_type;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ether_hdr);
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* VLAN */
+ while (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ1)) {
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_vlan_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_VLAN;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ ether_type = ((struct rte_vlan_hdr *)&data[pkti])->eth_proto;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_vlan_hdr);
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 3 */
+ uint8_t next_header = 0;
+
+ if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) && (data[pkti] & 0xF0) == 0x40) {
+ if (size - pkti < (int)sizeof(struct rte_ipv4_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_header = data[pkti + 9];
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ipv4_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 4 */
+ int gtpu_encap = 0;
+
+ if (next_header == 1) { /* ICMP */
+ if (size - pkti < (int)sizeof(struct rte_icmp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 58) { /* ICMP6 */
+ if (size - pkti < (int)sizeof(struct rte_flow_item_icmp6))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP6;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 6) { /* TCP */
+ if (size - pkti < (int)sizeof(struct rte_tcp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_TCP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_tcp_hdr);
+
+ } else if (next_header == 17) { /* UDP */
+ if (size - pkti < (int)sizeof(struct rte_udp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_UDP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ gtpu_encap = ((struct rte_udp_hdr *)&data[pkti])->dst_port ==
+ rte_cpu_to_be_16(RTE_GTPU_UDP_PORT);
+
+ hdri += 1;
+ pkti += sizeof(struct rte_udp_hdr);
+
+ } else if (next_header == 132) {/* SCTP */
+ if (size - pkti < (int)sizeof(struct rte_sctp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_SCTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_sctp_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* GTPv1-U */
+ if (gtpu_encap) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr))
+ return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ int extension_present_bit = ((struct rte_gtp_hdr *)&data[pkti])
+ ->e;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr);
+
+ if (extension_present_bit) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr_ext_word))
+ return -1;
+
+			out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+			out[hdri].spec = &data[pkti];
+			out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ uint8_t next_ext = ((struct rte_gtp_hdr_ext_word *)&data[pkti])
+ ->next_ext;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr_ext_word);
+
+ while (next_ext) {
+ size_t ext_len = data[pkti] * 4;
+
+ if (size - pkti < (int)ext_len)
+ return -1;
+
+				out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+				out[hdri].spec = &data[pkti];
+				out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_ext = data[pkti + ext_len - 1];
+
+ hdri += 1;
+ pkti += ext_len;
+ }
+ }
+ }
+
+ if (size - pkti != 0)
+ return -1;
+
+interpret_end:
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_END;
+ out[hdri].spec = NULL;
+ out[hdri].mask = NULL;
+
+ return hdri + 1;
+}
+
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
{
if (error) {
@@ -95,13 +300,78 @@ int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item
return (type >= 0) ? 0 : -1;
}
-int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- int max_elem __rte_unused,
- uint32_t queue_offset __rte_unused)
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset)
{
+ int aidx = 0;
int type = -1;
+ do {
+ type = actions[aidx].type;
+ if (type >= 0) {
+ action->flow_actions[aidx].type = type;
+
+ /*
+ * Non-compatible actions handled here
+ */
+ switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
+ const struct rte_flow_action_raw_decap *decap =
+ (const struct rte_flow_action_raw_decap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(decap->data, NULL, decap->size,
+ action->decap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->decap.data = decap->data;
+ action->decap.size = decap->size;
+ action->decap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->decap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: {
+ const struct rte_flow_action_raw_encap *encap =
+ (const struct rte_flow_action_raw_encap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(encap->data, encap->preserve,
+ encap->size, action->encap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->encap.data = encap->data;
+ action->encap.preserve = encap->preserve;
+ action->encap.size = encap->size;
+ action->encap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->encap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE: {
+ const struct rte_flow_action_queue *queue =
+ (const struct rte_flow_action_queue *)actions[aidx].conf;
+ action->queue.index = queue->index + queue_offset;
+ action->flow_actions[aidx].conf = &action->queue;
+ }
+ break;
+
+ default: {
+ action->flow_actions[aidx].conf = actions[aidx].conf;
+ }
+ break;
+ }
+
+ aidx++;
+
+ if (aidx == max_elem)
+ return -1;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
return (type >= 0) ? 0 : -1;
}
--
2.45.0
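One detail worth calling out in the RAW_ENCAP handler above is the final loop that converts the prepared encapsulation header to the hardware's 128-bit little-endian layout: each 16-byte block of the tunnel header is reversed in place. A self-contained sketch of that transform (standalone function name; the driver does this inline on `fd->tun_hdr.d.hdr8`):

```c
#include <stddef.h>
#include <stdint.h>

/* Reverse each 16-byte block of buf in place, as the RAW_ENCAP handler does
 * when preparing tunnel-header data for the FPGA (128-bit little endian).
 * The caller's buffer must be padded to a multiple of 16 bytes; the driver's
 * MAX_TUN_HDR_SIZE (128) buffer guarantees this for all valid sizes. */
static void to_128bit_le(uint8_t *buf, size_t size)
{
	for (size_t i = 0; i < (size + 15) / 16; ++i) {
		uint8_t *b = buf + i * 16;

		for (unsigned int j = 0; j < 8; ++j) {
			uint8_t t = b[j];

			b[j] = b[15 - j];
			b[15 - j] = t;
		}
	}
}
```

Swapping pairs `(j, 15 - j)` for `j` in 0..7 is a full reversal of the 16-byte block, so applying the transform twice restores the original data.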
* [PATCH v2 26/73] net/ntnic: add cat module
@ 2024-10-22 16:54 ` Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Categorizer module’s main purpose is to select the behavior
of other modules in the FPGA pipeline depending on a protocol check.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 24 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 267 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 165 +++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 47 +++
.../profile_inline/flow_api_profile_inline.c | 83 ++++++
5 files changed, 586 insertions(+)
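The accessor functions added in hw_mod_cat.c below all funnel through a single `_mod()` helper that serves both the `_set()` and `_get()` entry points, selected by a `get` flag, with the per-field dispatch done via a `GET_SET` macro. An illustrative, self-contained version of that idiom (the macro body and the stand-in state are assumptions for the sketch, not the backend's actual definitions):

```c
#include <stdint.h>

/* Illustrative version of the backend's GET_SET idiom: read or write a
 * register-shadow field depending on the enclosing 'get' flag. */
#define GET_SET(field, value_ptr)                \
	do {                                     \
		if (get)                         \
			*(value_ptr) = (field);  \
		else                             \
			(field) = *(value_ptr);  \
	} while (0)

/* Stand-in for a shadowed field such as be->cat.v18.fte[index].enable_bm. */
static uint32_t enable_bm;

/* One helper implements both directions, like hw_mod_cat_fte_mod(). */
static int fte_mod(uint32_t *value, int get)
{
	GET_SET(enable_bm, value);
	return 0;
}

static int fte_set(uint32_t value)  { return fte_mod(&value, 0); }
static int fte_get(uint32_t *value) { return fte_mod(value, 1); }
```

Keeping get and set in one helper means the index bounds check, the version switch, and the field dispatch are written once, which is why the patch adds thin `_set()`/`_get()` wrappers rather than duplicating the switch.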
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 1b45ea4296..87fc16ecb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -315,11 +315,35 @@ int hw_mod_cat_reset(struct flow_api_backend_s *be);
int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
+/* KCE/KCS/FTE KM */
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+/* KCE/KCS/FTE FLM */
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count);
+
int hw_mod_cat_kcc_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_exo_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index d266760123..9164ec1ae0 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -951,6 +951,97 @@ static int hw_mod_cat_fte_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_fte_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_fte_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ const uint32_t key_cnt = (_VER_ >= 20) ? 4 : 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8 * be->cat.nb_flow_types * key_cnt)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v18.fte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v21.fte[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, value, 1);
+}
+
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -964,6 +1055,45 @@ int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cte_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cte_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTE_ENABLE_BM:
+ GET_SET(be->cat.v18.cte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -979,6 +1109,51 @@ int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cts_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cts_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ int addr_size = (be->cat.cts_num + 1) / 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs * addr_size)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTS_CAT_A:
+ GET_SET(be->cat.v18.cts[index].cat_a, value);
+ break;
+
+ case HW_CAT_CTS_CAT_B:
+ GET_SET(be->cat.v18.cts[index].cat_b, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -992,6 +1167,98 @@ int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cot_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cot_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_COT_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->cat.v18.cot[index], (uint8_t)*value,
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->cat.v18.cot, struct cat_v18_cot_s, index, *value);
+ break;
+
+ case HW_CAT_COT_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->cat.v18.cot, struct cat_v18_cot_s, index, *value,
+ be->max_categories);
+ break;
+
+ case HW_CAT_COT_COPY_FROM:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memcpy(&be->cat.v18.cot[index], &be->cat.v18.cot[*value],
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COLOR:
+ GET_SET(be->cat.v18.cot[index].color, value);
+ break;
+
+ case HW_CAT_COT_KM:
+ GET_SET(be->cat.v18.cot[index].km, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cot_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4ea9387c80..addd5f288f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -22,6 +22,14 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
+ /* Items */
+ struct hw_db_inline_resource_db_cat {
+ struct hw_db_inline_cat_data data;
+ int ref;
+ } *cat;
+
+ uint32_t nb_cat;
+
/* Hardware */
struct hw_db_inline_resource_db_cfn {
@@ -47,6 +55,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_cat = ndev->be.cat.nb_cat_funcs;
+ db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
+
+ if (db->cat == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -56,6 +72,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->cat);
free(db->cfn);
@@ -70,6 +87,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_CAT:
+ hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_COT:
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
@@ -80,6 +101,69 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+/******************************************************************************/
+/* Filter */
+/******************************************************************************/
+
+/*
+ * Set up a filter to match:
+ * All packets in CFN checks
+ * All packets in KM
+ * All packets in FLM with look-up C FT equal to the specified argument
+ *
+ * Set up a QSL recipe to DROP all matching packets
+ *
+ * Note: QSL recipe 0 uses DISCARD in order to allow for exception paths (UNMQ);
+ * consequently, another QSL recipe with a hard DROP is needed
+ */
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id)
+{
+ (void)ft;
+ (void)qsl_hw_id;
+
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+ (void)offset;
+
+ /* Select and enable QSL recipe */
+ if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
+ return -1;
+
+ if (hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6))
+ return -1;
+
+ if (hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0x8))
+ return -1;
+
+ if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ /* Make all CFN checks TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, 0x0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x1))
+ return -1;
+
+ /* Final match: look-up_A == TRUE && look-up_C == TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3))
+ return -1;
+
+ if (hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ return 0;
+}
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -150,3 +234,84 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
db->cot[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* CAT */
+/******************************************************************************/
+
+static int hw_db_inline_cat_compare(const struct hw_db_inline_cat_data *data1,
+ const struct hw_db_inline_cat_data *data2)
+{
+ return data1->vlan_mask == data2->vlan_mask &&
+ data1->mac_port_mask == data2->mac_port_mask &&
+ data1->ptc_mask_frag == data2->ptc_mask_frag &&
+ data1->ptc_mask_l2 == data2->ptc_mask_l2 &&
+ data1->ptc_mask_l3 == data2->ptc_mask_l3 &&
+ data1->ptc_mask_l4 == data2->ptc_mask_l4 &&
+ data1->ptc_mask_tunnel == data2->ptc_mask_tunnel &&
+ data1->ptc_mask_l3_tunnel == data2->ptc_mask_l3_tunnel &&
+ data1->ptc_mask_l4_tunnel == data2->ptc_mask_l4_tunnel &&
+ data1->err_mask_ttl_tunnel == data2->err_mask_ttl_tunnel &&
+ data1->err_mask_ttl == data2->err_mask_ttl && data1->ip_prot == data2->ip_prot &&
+ data1->ip_prot_tunnel == data2->ip_prot_tunnel;
+}
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cat_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_CAT;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ int ref = db->cat[i].ref;
+
+ if (ref > 0 && hw_db_inline_cat_compare(data, &db->cat[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cat_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cat[idx.ids].ref = 1;
+ memcpy(&db->cat[idx.ids].data, data, sizeof(struct hw_db_inline_cat_data));
+
+ return idx;
+}
+
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cat[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cat[idx.ids].ref -= 1;
+
+ if (db->cat[idx.ids].ref <= 0) {
+ memset(&db->cat[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cat_data));
+ db->cat[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 0116af015d..38502ac1ec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,12 +36,37 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_cat_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
+ HW_DB_IDX_TYPE_CAT,
};
/* Functionality data types */
+struct hw_db_inline_cat_data {
+ uint32_t vlan_mask : 4;
+ uint32_t mac_port_mask : 8;
+ uint32_t ptc_mask_frag : 4;
+ uint32_t ptc_mask_l2 : 7;
+ uint32_t ptc_mask_l3 : 3;
+ uint32_t ptc_mask_l4 : 5;
+ uint32_t padding0 : 1;
+
+ uint32_t ptc_mask_tunnel : 11;
+ uint32_t ptc_mask_l3_tunnel : 3;
+ uint32_t ptc_mask_l4_tunnel : 5;
+ uint32_t err_mask_ttl_tunnel : 2;
+ uint32_t err_mask_ttl : 2;
+ uint32_t padding1 : 9;
+
+ uint8_t ip_prot;
+ uint8_t ip_prot_tunnel;
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -70,6 +95,16 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ };
+ };
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -84,4 +119,16 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+/**/
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data);
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+
+/**/
+
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9fc4908975..5176464054 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2347,6 +2351,67 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ struct hw_db_inline_action_set_data action_set_data = { 0 };
+ (void)action_set_data;
+
+ if (fd->jump_to_group != UINT32_MAX) {
+ /* Action Set only contains jump */
+ action_set_data.contains_jump = 1;
+ action_set_data.jump = fd->jump_to_group;
+
+ } else {
+ /* Action Set doesn't contain jump */
+ action_set_data.contains_jump = 0;
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = 0,
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle,
+ &cot_data);
+ fh->db_idxs[fh->db_idx_counter++] = cot_idx.raw;
+ action_set_data.cot = cot_idx;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
+
+ /* Setup CAT */
+ struct hw_db_inline_cat_data cat_data = {
+ .vlan_mask = (0xf << fd->vlans) & 0xf,
+ .mac_port_mask = 1 << fh->port_id,
+ .ptc_mask_frag = fd->fragmentation,
+ .ptc_mask_l2 = fd->l2_prot != -1 ? (1 << fd->l2_prot) : -1,
+ .ptc_mask_l3 = fd->l3_prot != -1 ? (1 << fd->l3_prot) : -1,
+ .ptc_mask_l4 = fd->l4_prot != -1 ? (1 << fd->l4_prot) : -1,
+ .err_mask_ttl = (fd->ttl_sub_enable &&
+ fd->ttl_sub_outer) ? -1 : 0x1,
+ .ptc_mask_tunnel = fd->tunnel_prot !=
+ -1 ? (1 << fd->tunnel_prot) : -1,
+ .ptc_mask_l3_tunnel =
+ fd->tunnel_l3_prot != -1 ? (1 << fd->tunnel_l3_prot) : -1,
+ .ptc_mask_l4_tunnel =
+ fd->tunnel_l4_prot != -1 ? (1 << fd->tunnel_l4_prot) : -1,
+ .err_mask_ttl_tunnel =
+ (fd->ttl_sub_enable && !fd->ttl_sub_outer) ? -1 : 0x1,
+ .ip_prot = fd->ip_prot,
+ .ip_prot_tunnel = fd->tunnel_ip_prot,
+ };
+ struct hw_db_cat_idx cat_idx =
+ hw_db_inline_cat_add(dev->ndev, dev->ndev->hw_db_handle, &cat_data);
+ fh->db_idxs[fh->db_idx_counter++] = cat_idx.raw;
+
+ if (cat_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference CAT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2379,6 +2444,20 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* COT is locked to CFN. Don't set color for CFN 0 */
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+
+ if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ /* Set up a filter matching all packets that violate traffic policing parameters */
+ flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+
+ if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE,
+ NT_VIOLATING_MBR_QSL) < 0)
+ goto err_exit0;
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -2413,6 +2492,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PRESET_ALL, 0, 0, 0);
+ hw_mod_cat_cfn_flush(&ndev->be, 0, 1);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+ hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
hw_mod_tpe_reset(&ndev->be);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 27/73] net/ntnic: add SLC LR module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (25 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 26/73] net/ntnic: add cat module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 28/73] net/ntnic: add PDB module Serhii Iliushyk
` (46 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new header to the packet.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 104 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 19 ++++
.../profile_inline/flow_api_profile_inline.c | 37 ++++++-
5 files changed, 257 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 87fc16ecb4..2711f44083 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -697,6 +697,8 @@ int hw_mod_slc_lr_alloc(struct flow_api_backend_s *be);
void hw_mod_slc_lr_free(struct flow_api_backend_s *be);
int hw_mod_slc_lr_reset(struct flow_api_backend_s *be);
int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value);
struct pdb_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
index 1d878f3f96..30e5e38690 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
@@ -66,3 +66,103 @@ int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int co
return be->iface->slc_lr_rcp_flush(be->be_dev, &be->slc_lr, start_idx, count);
}
+
+static int hw_mod_slc_lr_rcp_mod(struct flow_api_backend_s *be, enum hw_slc_lr_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 2:
+ switch (field) {
+ case HW_SLC_LR_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->slc_lr.v2.rcp[index], (uint8_t)*value,
+ sizeof(struct hw_mod_slc_lr_v2_s));
+ break;
+
+ case HW_SLC_LR_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value, be->max_categories);
+ break;
+
+ case HW_SLC_LR_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].head_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].tail_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_PCAP:
+ GET_SET(be->slc_lr.v2.rcp[index].pcap, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_slc_lr_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index addd5f288f..b17bce3745 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,7 +20,13 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_slc_lr {
+ struct hw_db_inline_slc_lr_data data;
+ int ref;
+ } *slc_lr;
+
uint32_t nb_cot;
+ uint32_t nb_slc_lr;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -55,6 +61,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_slc_lr = ndev->be.max_categories;
+ db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
+
+ if (db->slc_lr == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -72,6 +86,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->slc_lr);
free(db->cat);
free(db->cfn);
@@ -95,6 +110,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_SLC_LR:
+ hw_db_inline_slc_lr_deref(ndev, db_handle,
+ *(struct hw_db_slc_lr_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -235,6 +255,90 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* SLC_LR */
+/******************************************************************************/
+
+static int hw_db_inline_slc_lr_compare(const struct hw_db_inline_slc_lr_data *data1,
+ const struct hw_db_inline_slc_lr_data *data2)
+{
+ if (!data1->head_slice_en)
+ return data1->head_slice_en == data2->head_slice_en;
+
+ return data1->head_slice_en == data2->head_slice_en &&
+ data1->head_slice_dyn == data2->head_slice_dyn &&
+ data1->head_slice_ofs == data2->head_slice_ofs;
+}
+
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_slc_lr_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_SLC_LR;
+
+ for (uint32_t i = 1; i < db->nb_slc_lr; ++i) {
+ int ref = db->slc_lr[i].ref;
+
+ if (ref > 0 && hw_db_inline_slc_lr_compare(data, &db->slc_lr[i].data)) {
+ idx.ids = i;
+ hw_db_inline_slc_lr_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->slc_lr[idx.ids].ref = 1;
+ memcpy(&db->slc_lr[idx.ids].data, data, sizeof(struct hw_db_inline_slc_lr_data));
+
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_SLC_EN, idx.ids, data->head_slice_en);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_DYN, idx.ids, data->head_slice_dyn);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_OFS, idx.ids, data->head_slice_ofs);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->slc_lr[idx.ids].ref += 1;
+}
+
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->slc_lr[idx.ids].ref -= 1;
+
+ if (db->slc_lr[idx.ids].ref <= 0) {
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->slc_lr[idx.ids].data, 0x0, sizeof(struct hw_db_inline_slc_lr_data));
+ db->slc_lr[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 38502ac1ec..ef63336b1c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -40,10 +40,15 @@ struct hw_db_cat_idx {
HW_DB_IDX;
};
+struct hw_db_slc_lr_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_SLC_LR,
};
/* Functionality data types */
@@ -89,6 +94,13 @@ struct hw_db_inline_cot_data {
uint32_t padding : 24;
};
+struct hw_db_inline_slc_lr_data {
+ uint32_t head_slice_en : 1;
+ uint32_t head_slice_dyn : 5;
+ uint32_t head_slice_ofs : 8;
+ uint32_t padding : 18;
+};
+
struct hw_db_inline_hsh_data {
uint32_t func;
uint64_t hash_mask;
@@ -119,6 +131,13 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data);
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 5176464054..73fab083de 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2277,18 +2277,38 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
-static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
- const struct nic_flow_def *fd __rte_unused,
+static int setup_flow_flm_actions(struct flow_eth_dev *dev,
+ const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
- uint32_t local_idxs[] __rte_unused,
- uint32_t *local_idx_counter __rte_unused,
+ uint32_t local_idxs[],
+ uint32_t *local_idx_counter,
uint16_t *flm_rpl_ext_ptr __rte_unused,
uint32_t *flm_ft __rte_unused,
uint32_t *flm_scrub __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error)
{
+ /* Setup SLC LR */
+ struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
+
+ if (fd->header_strip_end_dyn != 0 || fd->header_strip_end_ofs != 0) {
+ struct hw_db_inline_slc_lr_data slc_lr_data = {
+ .head_slice_en = 1,
+ .head_slice_dyn = fd->header_strip_end_dyn,
+ .head_slice_ofs = fd->header_strip_end_ofs,
+ };
+ slc_lr_idx =
+ hw_db_inline_slc_lr_add(dev->ndev, dev->ndev->hw_db_handle, &slc_lr_data);
+ local_idxs[(*local_idx_counter)++] = slc_lr_idx.raw;
+
+ if (slc_lr_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference SLC LR resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -2450,6 +2470,9 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* SLC LR index 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+
/* Set up a filter matching all packets that violate traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2498,6 +2521,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
+
hw_mod_tpe_reset(&ndev->be);
flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 28/73] net/ntnic: add PDB module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (26 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 27/73] net/ntnic: add SLC LR module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 29/73] net/ntnic: add QSL module Serhii Iliushyk
` (45 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Packet Description Builder module creates packet meta-data,
for example virtio-net headers.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 ++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 17 +++
3 files changed, 164 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 2711f44083..7f1449d8ee 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -740,6 +740,9 @@ int hw_mod_pdb_alloc(struct flow_api_backend_s *be);
void hw_mod_pdb_free(struct flow_api_backend_s *be);
int hw_mod_pdb_reset(struct flow_api_backend_s *be);
int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value);
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be);
struct tpe_func_s {
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
index c3facacb08..59285405ba 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
@@ -85,6 +85,150 @@ int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->pdb_rcp_flush(be->be_dev, &be->pdb, start_idx, count);
}
+static int hw_mod_pdb_rcp_mod(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 9:
+ switch (field) {
+ case HW_PDB_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->pdb.v9.rcp[index], (uint8_t)*value,
+ sizeof(struct pdb_v9_rcp_s));
+ break;
+
+ case HW_PDB_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value,
+ be->pdb.nb_pdb_rcp_categories);
+ break;
+
+ case HW_PDB_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value);
+ break;
+
+ case HW_PDB_RCP_DESCRIPTOR:
+ GET_SET(be->pdb.v9.rcp[index].descriptor, value);
+ break;
+
+ case HW_PDB_RCP_DESC_LEN:
+ GET_SET(be->pdb.v9.rcp[index].desc_len, value);
+ break;
+
+ case HW_PDB_RCP_TX_PORT:
+ GET_SET(be->pdb.v9.rcp[index].tx_port, value);
+ break;
+
+ case HW_PDB_RCP_TX_IGNORE:
+ GET_SET(be->pdb.v9.rcp[index].tx_ignore, value);
+ break;
+
+ case HW_PDB_RCP_TX_NOW:
+ GET_SET(be->pdb.v9.rcp[index].tx_now, value);
+ break;
+
+ case HW_PDB_RCP_CRC_OVERWRITE:
+ GET_SET(be->pdb.v9.rcp[index].crc_overwrite, value);
+ break;
+
+ case HW_PDB_RCP_ALIGN:
+ GET_SET(be->pdb.v9.rcp[index].align, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs0_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs0_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs1_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs1_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs2_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs2_rel, value);
+ break;
+
+ case HW_PDB_RCP_IP_PROT_TNL:
+ GET_SET(be->pdb.v9.rcp[index].ip_prot_tnl, value);
+ break;
+
+ case HW_PDB_RCP_PPC_HSH:
+ GET_SET(be->pdb.v9.rcp[index].ppc_hsh, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_EN:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_en, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_BIT:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_bit, value);
+ break;
+
+ case HW_PDB_RCP_PCAP_KEEP_FCS:
+ GET_SET(be->pdb.v9.rcp[index].pcap_keep_fcs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 9 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_pdb_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be)
{
return be->iface->pdb_config_flush(be->be_dev, &be->pdb);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 73fab083de..1eab579142 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2473,6 +2473,19 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ /* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
+ */
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESCRIPTOR, 0, 7) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESC_LEN, 0, 6) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2530,6 +2543,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+ hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_PRESET_ALL, 0, 0);
+ hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 29/73] net/ntnic: add QSL module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (27 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 28/73] net/ntnic: add PDB module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 30/73] net/ntnic: add KM module Serhii Iliushyk
` (44 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Queue Selector module directs packets to a given destination,
which includes host queues, physical ports, exception paths, and discard.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/hw_mod_backend.h | 8 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 65 ++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 96 +++++++-
7 files changed, 595 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 7f031ccda8..edffd0a57a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -184,8 +184,11 @@ extern const char *dbg_res_descr[];
int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
uint32_t alignment);
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment);
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
#endif
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 7f1449d8ee..6fa2a3d94f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -666,8 +666,16 @@ int hw_mod_qsl_alloc(struct flow_api_backend_s *be);
void hw_mod_qsl_free(struct flow_api_backend_s *be);
int hw_mod_qsl_reset(struct flow_api_backend_s *be);
int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value);
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_qsl_unmq_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
uint32_t value);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 2aee2ee973..a51d621ef9 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -106,11 +106,52 @@ int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return -1;
}
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment)
+{
+ unsigned int idx_offs;
+
+ for (unsigned int res_idx = 0; res_idx < ndev->res[res_type].resource_count - (num - 1);
+ res_idx += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, res_idx)) {
+ for (idx_offs = 1; idx_offs < num; idx_offs++)
+ if (flow_nic_is_resource_used(ndev, res_type, res_idx + idx_offs))
+ break;
+
+ if (idx_offs < num)
+ continue;
+
+ /* found a contiguous number of "num" res_type elements - allocate them */
+ for (idx_offs = 0; idx_offs < num; idx_offs++) {
+ flow_nic_mark_resource_used(ndev, res_type, res_idx + idx_offs);
+ ndev->res[res_type].ref[res_idx + idx_offs] = 1;
+ }
+
+ return res_idx;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
}
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
+{
+ NT_LOG(DBG, FILTER, "Reference resource %s idx %i (before ref cnt %i)",
+ dbg_res_descr[res_type], index, ndev->res[res_type].ref[index]);
+ assert(flow_nic_is_resource_used(ndev, res_type, index));
+
+ if (ndev->res[res_type].ref[index] == (uint32_t)-1)
+ return -1;
+
+ ndev->res[res_type].ref[index]++;
+ return 0;
+}
+
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
{
NT_LOG(DBG, FILTER, "De-reference resource %s idx %i (before ref cnt %i)",
@@ -348,6 +389,18 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 0);
hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1);
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (int i = 0; i < eth_dev->num_queues; ++i) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value & ~(1U << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
#ifdef FLOW_DEBUG
ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
#endif
@@ -580,6 +633,18 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->rss_target_id = -1;
+ if (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (i = 0; i < eth_dev->num_queues; i++) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value | (1 << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
*rss_target_id = eth_dev->rss_target_id;
nic_insert_eth_port_dev(ndev, eth_dev);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
index 93b37d595e..70fe97a298 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
@@ -104,6 +104,114 @@ int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_rcp_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_rcp_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.rcp[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_rcp_s));
+ break;
+
+ case HW_QSL_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value,
+ be->qsl.nb_rcp_categories);
+ break;
+
+ case HW_QSL_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value);
+ break;
+
+ case HW_QSL_RCP_DISCARD:
+ GET_SET(be->qsl.v7.rcp[index].discard, value);
+ break;
+
+ case HW_QSL_RCP_DROP:
+ GET_SET(be->qsl.v7.rcp[index].drop, value);
+ break;
+
+ case HW_QSL_RCP_TBL_LO:
+ GET_SET(be->qsl.v7.rcp[index].tbl_lo, value);
+ break;
+
+ case HW_QSL_RCP_TBL_HI:
+ GET_SET(be->qsl.v7.rcp[index].tbl_hi, value);
+ break;
+
+ case HW_QSL_RCP_TBL_IDX:
+ GET_SET(be->qsl.v7.rcp[index].tbl_idx, value);
+ break;
+
+ case HW_QSL_RCP_TBL_MSK:
+ GET_SET(be->qsl.v7.rcp[index].tbl_msk, value);
+ break;
+
+ case HW_QSL_RCP_LR:
+ GET_SET(be->qsl.v7.rcp[index].lr, value);
+ break;
+
+ case HW_QSL_RCP_TSA:
+ GET_SET(be->qsl.v7.rcp[index].tsa, value);
+ break;
+
+ case HW_QSL_RCP_VLI:
+ GET_SET(be->qsl.v7.rcp[index].vli, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -117,6 +225,73 @@ int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qst_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qst_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_qst_entries) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.qst[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_qst_s));
+ break;
+
+ case HW_QSL_QST_QUEUE:
+ GET_SET(be->qsl.v7.qst[index].queue, value);
+ break;
+
+ case HW_QSL_QST_EN:
+ GET_SET(be->qsl.v7.qst[index].en, value);
+ break;
+
+ case HW_QSL_QST_TX_PORT:
+ GET_SET(be->qsl.v7.qst[index].tx_port, value);
+ break;
+
+ case HW_QSL_QST_LRE:
+ GET_SET(be->qsl.v7.qst[index].lre, value);
+ break;
+
+ case HW_QSL_QST_TCI:
+ GET_SET(be->qsl.v7.qst[index].tci, value);
+ break;
+
+ case HW_QSL_QST_VEN:
+ GET_SET(be->qsl.v7.qst[index].ven, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -130,6 +305,49 @@ int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qen_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qen_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= QSL_QEN_ENTRIES) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QEN_EN:
+ GET_SET(be->qsl.v7.qen[index].en, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, &value, 0);
+}
+
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, value, 1);
+}
+
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index b17bce3745..5572662647 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,12 +20,18 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_qsl {
+ struct hw_db_inline_qsl_data data;
+ int qst_idx;
+ } *qsl;
+
struct hw_db_inline_resource_db_slc_lr {
struct hw_db_inline_slc_lr_data data;
int ref;
} *slc_lr;
uint32_t nb_cot;
+ uint32_t nb_qsl;
uint32_t nb_slc_lr;
/* Items */
@@ -61,6 +67,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_qsl = ndev->be.qsl.nb_rcp_categories;
+ db->qsl = calloc(db->nb_qsl, sizeof(struct hw_db_inline_resource_db_qsl));
+
+ if (db->qsl == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_slc_lr = ndev->be.max_categories;
db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
@@ -86,6 +100,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->qsl);
free(db->slc_lr);
free(db->cat);
@@ -110,6 +125,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_QSL:
+ hw_db_inline_qsl_deref(ndev, db_handle, *(struct hw_db_qsl_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_SLC_LR:
hw_db_inline_slc_lr_deref(ndev, db_handle,
*(struct hw_db_slc_lr_idx *)&idxs[i]);
@@ -145,6 +164,13 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
+ /* QSL for traffic policing */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_hw_id, 0x3) < 0)
+ return -1;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, qsl_hw_id, 1) < 0)
+ return -1;
+
/* Select and enable QSL recipe */
if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
return -1;
@@ -255,6 +281,175 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* QSL */
+/******************************************************************************/
+
+/* Calculate queue mask for QSL TBL_MSK for given number of queues.
+ * NOTE: If number of queues is not power of two, then queue mask will be created
+ * for nearest smaller power of two.
+ */
+static uint32_t queue_mask(uint32_t nr_queues)
+{
+ nr_queues |= nr_queues >> 1;
+ nr_queues |= nr_queues >> 2;
+ nr_queues |= nr_queues >> 4;
+ nr_queues |= nr_queues >> 8;
+ nr_queues |= nr_queues >> 16;
+ return nr_queues >> 1;
+}
+
+static int hw_db_inline_qsl_compare(const struct hw_db_inline_qsl_data *data1,
+ const struct hw_db_inline_qsl_data *data2)
+{
+ if (data1->discard != data2->discard || data1->drop != data2->drop ||
+ data1->table_size != data2->table_size || data1->retransmit != data2->retransmit) {
+ return 0;
+ }
+
+ for (int i = 0; i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ if (data1->table[i].queue != data2->table[i].queue ||
+ data1->table[i].queue_en != data2->table[i].queue_en ||
+ data1->table[i].tx_port != data2->table[i].tx_port ||
+ data1->table[i].tx_port_en != data2->table[i].tx_port_en) {
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_qsl_idx qsl_idx = { .raw = 0 };
+ uint32_t qst_idx = 0;
+ int res;
+
+ qsl_idx.type = HW_DB_IDX_TYPE_QSL;
+
+ if (data->discard) {
+ qsl_idx.ids = 0;
+ return qsl_idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_qsl; ++i) {
+ if (hw_db_inline_qsl_compare(data, &db->qsl[i].data)) {
+ qsl_idx.ids = i;
+ hw_db_inline_qsl_ref(ndev, db, qsl_idx);
+ return qsl_idx;
+ }
+ }
+
+ res = flow_nic_alloc_resource(ndev, RES_QSL_RCP, 1);
+
+ if (res < 0) {
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qsl_idx.ids = res & 0xff;
+
+ if (data->table_size > 0) {
+ res = flow_nic_alloc_resource_config(ndev, RES_QSL_QST, data->table_size, 1);
+
+ if (res < 0) {
+ flow_nic_deref_resource(ndev, RES_QSL_RCP, qsl_idx.ids);
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qst_idx = (uint32_t)res;
+ }
+
+ memcpy(&db->qsl[qsl_idx.ids].data, data, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[qsl_idx.ids].qst_idx = qst_idx;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, qsl_idx.ids, 0x0);
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, qsl_idx.ids, data->discard);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_idx.ids, data->drop * 0x3);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_LR, qsl_idx.ids, data->retransmit * 0x3);
+
+ if (data->table_size == 0) {
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, 0x0);
+
+ } else {
+ const uint32_t table_start = qst_idx;
+ const uint32_t table_end = table_start + data->table_size - 1;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, table_end);
+
+ /* Toeplitz hash function uses TBL_IDX and TBL_MSK. */
+ uint32_t msk = queue_mask(table_end - table_start + 1);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, msk);
+
+ for (uint32_t i = 0; i < data->table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, table_start + i, 0x0);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_TX_PORT, table_start + i,
+ data->table[i].tx_port);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_LRE, table_start + i,
+ data->table[i].tx_port_en);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_QUEUE, table_start + i,
+ data->table[i].queue);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_EN, table_start + i,
+ data->table[i].queue_en);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, data->table_size);
+ }
+
+ hw_mod_qsl_rcp_flush(&ndev->be, qsl_idx.ids, 1);
+
+ return qsl_idx;
+}
+
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ (void)db_handle;
+
+ if (!idx.error && idx.ids != 0)
+ flow_nic_ref_resource(ndev, RES_QSL_RCP, idx.ids);
+}
+
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error || idx.ids == 0)
+ return;
+
+ if (flow_nic_deref_resource(ndev, RES_QSL_RCP, idx.ids) == 0) {
+ const int table_size = (int)db->qsl[idx.ids].data.table_size;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_qsl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ if (table_size > 0) {
+ const int table_start = db->qsl[idx.ids].qst_idx;
+
+ for (int i = 0; i < (int)table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL,
+ table_start + i, 0x0);
+ flow_nic_free_resource(ndev, RES_QSL_QST, table_start + i);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, table_size);
+ }
+
+ memset(&db->qsl[idx.ids].data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[idx.ids].qst_idx = 0;
+ }
+}
+
/******************************************************************************/
/* SLC_LR */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index ef63336b1c..d0435acaef 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,6 +36,10 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_qsl_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cat_idx {
HW_DB_IDX;
};
@@ -48,6 +52,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
};
@@ -113,6 +118,7 @@ struct hw_db_inline_action_set_data {
int jump;
struct {
struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
};
};
};
@@ -131,6 +137,11 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data);
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+
struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_slc_lr_data *data);
void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1eab579142..6d72f8d99b 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2277,9 +2277,55 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
+
+static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_data *qsl_data,
+ uint32_t num_dest_port, uint32_t num_queues)
+{
+ memset(qsl_data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+
+ if (fd->dst_num_avail <= 0) {
+ qsl_data->drop = 1;
+
+ } else {
+ assert(fd->dst_num_avail < HW_DB_INLINE_MAX_QST_PER_QSL);
+
+ uint32_t ports[fd->dst_num_avail];
+ uint32_t queues[fd->dst_num_avail];
+
+ uint32_t port_index = 0;
+ uint32_t queue_index = 0;
+ uint32_t max = num_dest_port > num_queues ? num_dest_port : num_queues;
+
+ memset(ports, 0, fd->dst_num_avail);
+ memset(queues, 0, fd->dst_num_avail);
+
+ qsl_data->table_size = max;
+ qsl_data->retransmit = num_dest_port > 0 ? 1 : 0;
+
+ for (int i = 0; i < fd->dst_num_avail; ++i)
+ if (fd->dst_id[i].type == PORT_PHY)
+ ports[port_index++] = fd->dst_id[i].id;
+
+ else if (fd->dst_id[i].type == PORT_VIRT)
+ queues[queue_index++] = fd->dst_id[i].id;
+
+ for (uint32_t i = 0; i < max; ++i) {
+ if (num_dest_port > 0) {
+ qsl_data->table[i].tx_port = ports[i % num_dest_port];
+ qsl_data->table[i].tx_port_en = 1;
+ }
+
+ if (num_queues > 0) {
+ qsl_data->table[i].queue = queues[i % num_queues];
+ qsl_data->table[i].queue_en = 1;
+ }
+ }
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
- const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
uint32_t local_idxs[],
@@ -2289,6 +2335,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
+ local_idxs[(*local_idx_counter)++] = qsl_idx.raw;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2329,6 +2386,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
fh->caller_id = caller_id;
struct hw_db_inline_qsl_data qsl_data;
+ setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
@@ -2399,6 +2457,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle,
+ &qsl_data);
+ fh->db_idxs[fh->db_idx_counter++] = qsl_idx.raw;
+ action_set_data.qsl = qsl_idx;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2470,6 +2541,24 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* Initialize QSL with unmatched recipe index 0 - discard */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, 0, 0x1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, 0);
+
+ /* Initialize QST with default index 0 */
+ if (hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, 0, 0x0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_qst_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
+
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
@@ -2488,6 +2577,7 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
NT_FLM_VIOLATING_MBR_FLOW_TYPE,
@@ -2534,6 +2624,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, 0, 0);
+ hw_mod_qsl_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_QSL_RCP, 0);
+
hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 30/73] net/ntnic: add KM module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (28 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 29/73] net/ntnic: add QSL module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 31/73] net/ntnic: add hash API Serhii Iliushyk
` (43 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Key Matcher module checks the values of individual fields of a packet.
It supports both exact match, which is implemented with a CAM,
and wildcards, which are implemented with a TCAM.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 110 +-
drivers/net/ntnic/include/hw_mod_backend.h | 64 +-
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1065 +++++++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++
.../profile_inline/flow_api_hw_db_inline.h | 38 +
.../profile_inline/flow_api_profile_inline.c | 162 +++
7 files changed, 2024 insertions(+), 29 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b1d39b919b..a0f02f4e8a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -52,34 +52,32 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_WORD_NUM 24
+#define MAX_BANKS 6
+
+#define MAX_TCAM_START_OFFSETS 4
+
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
/*
- * Tunnel encapsulation header definition
+ *   128    128    32    32    32
+ * | QW0 || QW4 || SW8 || SW9 | SWX in FPGA
+ *
+ * Each word may start at any offset. The enabled words are
+ * concatenated in order to build the extracted match data, and the
+ * match key must be built the same way.
*/
-#define MAX_TUN_HDR_SIZE 128
-struct tunnel_header_s {
- union {
- uint8_t hdr8[MAX_TUN_HDR_SIZE];
- uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
- } d;
- uint32_t user_port_id;
- uint8_t len;
-
- uint8_t nb_vlans;
-
- uint8_t ip_version; /* 4: v4, 6: v6 */
- uint16_t ip_csum_precalc;
-
- uint8_t new_outer;
- uint8_t l2_len;
- uint8_t l3_len;
- uint8_t l4_len;
+enum extractor_e {
+ KM_USE_EXTRACTOR_UNDEF,
+ KM_USE_EXTRACTOR_QWORD,
+ KM_USE_EXTRACTOR_SWORD,
};
struct match_elem_s {
+ enum extractor_e extr;
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
uint32_t e_mask[4];
@@ -89,16 +87,76 @@ struct match_elem_s {
uint32_t word_len;
};
+enum cam_tech_use_e {
+ KM_CAM,
+ KM_TCAM,
+ KM_SYNERGY
+};
+
struct km_flow_def_s {
struct flow_api_backend_s *be;
+ /* For keeping track of identical entries */
+ struct km_flow_def_s *reference;
+ struct km_flow_def_s *root;
+
/* For collect flow elements and sorting */
struct match_elem_s match[MAX_MATCH_FIELDS];
+ struct match_elem_s *match_map[MAX_MATCH_FIELDS];
int num_ftype_elem;
+ /* Finally formatted CAM/TCAM entry */
+ enum cam_tech_use_e target;
+ uint32_t entry_word[MAX_WORD_NUM];
+ uint32_t entry_mask[MAX_WORD_NUM];
+ int key_word_size;
+
+ /* TCAM calculated possible bank start offsets */
+ int start_offsets[MAX_TCAM_START_OFFSETS];
+ int num_start_offsets;
+
/* Flow information */
/* HW input port ID needed for compare. In port must be identical on flow types */
uint32_t port_id;
+ uint32_t info; /* used for color (actions) */
+ int info_set;
+ int flow_type; /* 0 is illegal and used as unset */
+ int flushed_to_target; /* if this km entry has been finally programmed into NIC hw */
+
+ /* CAM specific bank management */
+ int cam_paired;
+ int record_indexes[MAX_BANKS];
+ int bank_used;
+ uint32_t *cuckoo_moves; /* for CAM statistics only */
+ struct cam_distrib_s *cam_dist;
+
+ /* TCAM specific bank management */
+ struct tcam_distrib_s *tcam_dist;
+ int tcam_start_bank;
+ int tcam_record;
+};
+
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
};
enum flow_port_type_e {
@@ -247,11 +305,25 @@ struct flow_handle {
};
};
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
void km_free_ndev_resource_management(void **handle);
int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
uint32_t word_len, enum frame_offs_e start, int8_t offset);
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id);
+/*
+ * Compares 2 KM key definitions after the initial collect, validate and
+ * optimization steps. km is compared against an existing km1.
+ * If identical, km1's flow_type is returned.
+ */
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1);
+
+int km_rcp_set(struct km_flow_def_s *km, int index);
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color);
+int km_clear_data_match_entry(struct km_flow_def_s *km);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6fa2a3d94f..26903f2183 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -132,6 +132,22 @@ static inline int is_non_zero(const void *addr, size_t n)
return 0;
}
+/* Sideband info bit indicator */
+#define SWX_INFO (1 << 6)
+
+enum km_flm_if_select_e {
+ KM_FLM_IF_FIRST = 0,
+ KM_FLM_IF_SECOND = 1
+};
+
+#define FIELD_START_INDEX 100
+
+#define COMMON_FUNC_INFO_S \
+ int ver; \
+ void *base; \
+ unsigned int alloced_size; \
+ int debug
+
enum frame_offs_e {
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
@@ -141,22 +157,39 @@ enum frame_offs_e {
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ SB_VNI = SWX_INFO | 1,
+ SB_MAC_PORT = SWX_INFO | 2,
+ SB_KCC_ID = SWX_INFO | 3
};
-/* Sideband info bit indicator */
+enum {
+ QW0_SEL_EXCLUDE = 0,
+ QW0_SEL_FIRST32 = 1,
+ QW0_SEL_FIRST64 = 3,
+ QW0_SEL_ALL128 = 4,
+};
-enum km_flm_if_select_e {
- KM_FLM_IF_FIRST = 0,
- KM_FLM_IF_SECOND = 1
+enum {
+ QW4_SEL_EXCLUDE = 0,
+ QW4_SEL_FIRST32 = 1,
+ QW4_SEL_FIRST64 = 2,
+ QW4_SEL_ALL128 = 3,
};
-#define FIELD_START_INDEX 100
+enum {
+ DW8_SEL_EXCLUDE = 0,
+ DW8_SEL_FIRST32 = 3,
+};
-#define COMMON_FUNC_INFO_S \
- int ver; \
- void *base; \
- unsigned int alloced_size; \
- int debug
+enum {
+ DW10_SEL_EXCLUDE = 0,
+ DW10_SEL_FIRST32 = 2,
+};
+
+enum {
+ SWX_SEL_EXCLUDE = 0,
+ SWX_SEL_ALL32 = 1,
+};
enum {
PROT_OTHER = 0,
@@ -440,13 +473,24 @@ int hw_mod_km_alloc(struct flow_api_backend_s *be);
void hw_mod_km_free(struct flow_api_backend_s *be);
int hw_mod_km_reset(struct flow_api_backend_s *be);
int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value);
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value);
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count);
int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
int byte_val, uint32_t *value_set);
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set);
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 237e9f7b4e..30d6ea728e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -10,8 +10,34 @@
#include "flow_api_engine.h"
#include "nt_util.h"
+#define MAX_QWORDS 2
+#define MAX_SWORDS 2
+
+#define CUCKOO_MOVE_MAX_DEPTH 8
+
#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+#define CAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_cam_records + (rec))
+#define CAM_KM_DIST_IDX(bnk) \
+ ({ \
+ int _temp_bnk = (bnk); \
+ CAM_DIST_IDX(_temp_bnk, km->record_indexes[_temp_bnk]); \
+ })
+
+#define TCAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_tcam_bank_width + (rec))
+
+#define CAM_ENTRIES \
+ (km->be->km.nb_cam_banks * km->be->km.nb_cam_records * sizeof(struct cam_distrib_s))
+#define TCAM_ENTRIES \
+ (km->be->km.nb_tcam_bank_width * km->be->km.nb_tcam_banks * sizeof(struct tcam_distrib_s))
+
+/*
+ * CAM structures and defines
+ */
+struct cam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
static const struct cam_match_masks_s {
uint32_t word_len;
uint32_t key_mask[4];
@@ -36,6 +62,25 @@ static const struct cam_match_masks_s {
{ 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
};
+static int cam_addr_reserved_stack[CUCKOO_MOVE_MAX_DEPTH];
+
+/*
+ * TCAM structures and defines
+ */
+struct tcam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
+static int tcam_find_mapping(struct km_flow_def_s *km);
+
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
+{
+ km->cam_dist = (struct cam_distrib_s *)*handle;
+ km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
+ km->tcam_dist =
+ (struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+}
+
void km_free_ndev_resource_management(void **handle)
{
if (*handle) {
@@ -98,3 +143,1023 @@ int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_m
km->num_ftype_elem++;
return 0;
}
+
+static int get_word(struct km_flow_def_s *km, uint32_t size, int marked[])
+{
+ for (int i = 0; i < km->num_ftype_elem; i++)
+ if (!marked[i] && !(km->match[i].extr_start_offs_id & SWX_INFO) &&
+ km->match[i].word_len == size)
+ return i;
+
+ return -1;
+}
+
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id)
+{
+ /*
+ * Create combined extractor mappings. Key fields might be reordered
+ * to cover otherwise un-mappable cases. Split into CAM and TCAM and
+ * use synergy mode when available.
+ */
+ int match_marked[MAX_MATCH_FIELDS];
+ int idx = 0;
+ int next = 0;
+ int m_idx;
+ int size;
+
+ memset(match_marked, 0, sizeof(match_marked));
+
+ /* build QWords */
+ for (int qwords = 0; qwords < MAX_QWORDS; qwords++) {
+ size = 4;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 2;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 1;
+ m_idx = get_word(km, 1, match_marked);
+ }
+ }
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_QWORD;
+
+ /* build final entry words and mask array */
+ for (int i = 0; i < size; i++) {
+ km->entry_word[idx + i] = km->match[m_idx].e_word[i];
+ km->entry_mask[idx + i] = km->match[m_idx].e_mask[i];
+ }
+
+ idx += size;
+ next++;
+ }
+
+ m_idx = get_word(km, 4, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more QWords */
+ return -1;
+ }
+
+ /*
+ * On KM v6+ these are DWORDs; however, we only use them as SWORDs for
+ * now. No match could exploit them as DWORDs because of the maximum CAM
+ * length of 12 words: the last 2 words are taken by KCC-ID/SWX and
+ * Color. With one or no QWORD both DWORDs would fit in 10 words, but we
+ * don't have such a use case built in yet.
+ */
+ /* build SWords */
+ for (int swords = 0; swords < MAX_SWORDS; swords++) {
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_SWORD;
+
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[m_idx].e_word[0];
+ km->entry_mask[idx] = km->match[m_idx].e_mask[0];
+ idx++;
+ next++;
+ }
+
+ /*
+ * Make sure we took them all
+ */
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more SWords */
+ return -1;
+ }
+
+ /*
+ * Handle SWX words specially
+ */
+ int swx_found = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id & SWX_INFO) {
+ km->match_map[next] = &km->match[i];
+ km->match[i].extr = KM_USE_EXTRACTOR_SWORD;
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[i].e_word[0];
+ km->entry_mask[idx] = km->match[i].e_mask[0];
+ idx++;
+ next++;
+ swx_found = 1;
+ }
+ }
+
+ assert(next == km->num_ftype_elem);
+
+ km->key_word_size = idx;
+ km->port_id = port_id;
+
+ km->target = KM_CAM;
+
+ /*
+ * Finally decide whether to put this match->action into the TCAM.
+ * When an SWX word is used it must always go into the CAM, no matter
+ * the mask pattern. Later, when synergy mode is applied, we can do a split.
+ */
+ if (!swx_found && km->key_word_size <= 6) {
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match_map[i]->masked_for_tcam) {
+ /* At least one */
+ km->target = KM_TCAM;
+ }
+ }
+ }
+
+ NT_LOG(DBG, FILTER, "This flow goes into %s", (km->target == KM_TCAM) ? "TCAM" : "CAM");
+
+ if (km->target == KM_TCAM) {
+ if (km->key_word_size > 10) {
+ /* do not support SWX in TCAM */
+ return -1;
+ }
+
+ /*
+ * adjust for unsupported key word size in TCAM
+ */
+ if ((km->key_word_size == 5 || km->key_word_size == 7 || km->key_word_size == 9)) {
+ km->entry_mask[km->key_word_size] = 0;
+ km->key_word_size++;
+ }
+
+ /*
+ * Note: the length of a key cannot vary among records sharing the same
+ * banks.
+ *
+ * Calculate the possible start indexes. Unfortunately, restrictions in
+ * the TCAM lookup make it hard to handle key lengths larger than 6,
+ * though other sizes should be possible too.
+ */
+ switch (km->key_word_size) {
+ case 1:
+ for (int i = 0; i < 4; i++)
+ km->start_offsets[i] = 8 + i;
+
+ km->num_start_offsets = 4;
+ break;
+
+ case 2:
+ km->start_offsets[0] = 6;
+ km->num_start_offsets = 1;
+ break;
+
+ case 3:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 4:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 6:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Final Key word size too large: %i",
+ km->key_word_size);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1)
+{
+ if (km->target != km1->target || km->num_ftype_elem != km1->num_ftype_elem ||
+ km->key_word_size != km1->key_word_size || km->info_set != km1->info_set)
+ return 0;
+
+ /*
+ * before KCC-CAM:
+ * if port is added to match, then we can have different ports in CAT
+ * that reuses this flow type
+ */
+ int port_match_included = 0, kcc_swx_used = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id == SB_MAC_PORT) {
+ port_match_included = 1;
+ break;
+ }
+
+ if (km->match_map[i]->extr_start_offs_id == SB_KCC_ID) {
+ kcc_swx_used = 1;
+ break;
+ }
+ }
+
+ /*
+ * If not using KCC and if port match is not included in CAM,
+ * we need to have same port_id to reuse
+ */
+ if (!kcc_swx_used && !port_match_included && km->port_id != km1->port_id)
+ return 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ /* using same extractor types in same sequence */
+ if (km->match_map[i]->extr_start_offs_id !=
+ km1->match_map[i]->extr_start_offs_id ||
+ km->match_map[i]->rel_offs != km1->match_map[i]->rel_offs ||
+ km->match_map[i]->extr != km1->match_map[i]->extr ||
+ km->match_map[i]->word_len != km1->match_map[i]->word_len) {
+ return 0;
+ }
+ }
+
+ if (km->target == KM_CAM) {
+ /* in CAM must exactly match on all masks */
+ for (int i = 0; i < km->key_word_size; i++)
+ if (km->entry_mask[i] != km1->entry_mask[i])
+ return 0;
+
+ /* Would be set later if not reusing from km1 */
+ km->cam_paired = km1->cam_paired;
+
+ } else if (km->target == KM_TCAM) {
+ /*
+ * If TCAM, we must make sure Recipe Key Mask does not
+ * mask out enable bits in masks
+ * Note: it is important that km1 is the original creator
+ * of the KM Recipe, since it contains its true masks
+ */
+ for (int i = 0; i < km->key_word_size; i++)
+ if ((km->entry_mask[i] & km1->entry_mask[i]) != km->entry_mask[i])
+ return 0;
+
+ km->tcam_start_bank = km1->tcam_start_bank;
+ km->tcam_record = -1; /* needs to be found later */
+
+ } else {
+ NT_LOG(DBG, FILTER, "ERROR - KM target not defined or supported");
+ return 0;
+ }
+
+ /*
+ * Check for a flow clash. If already programmed return with -1
+ */
+ int double_match = 1;
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ if ((km->entry_word[i] & km->entry_mask[i]) !=
+ (km1->entry_word[i] & km1->entry_mask[i])) {
+ double_match = 0;
+ break;
+ }
+ }
+
+ if (double_match)
+ return -1;
+
+ /*
+ * Note that TCAM and CAM may reuse same RCP and flow type
+ * when this happens, CAM entry wins on overlap
+ */
+
+ /* Use same KM Recipe and same flow type - return flow type */
+ return km1->flow_type;
+}
+
+int km_rcp_set(struct km_flow_def_s *km, int index)
+{
+ int qw = 0;
+ int sw = 0;
+ int swx = 0;
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PRESET_ALL, index, 0, 0);
+
+ /* set extractor words, offs, contrib */
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ switch (km->match_map[i]->extr) {
+ case KM_USE_EXTRACTOR_SWORD:
+ if (km->match_map[i]->extr_start_offs_id & SWX_INFO) {
+ if (km->target == KM_CAM && swx == 0) {
+ /* SWX */
+ if (km->match_map[i]->extr_start_offs_id == SB_VNI) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - VNI");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_MAC_PORT) {
+ NT_LOG(DBG, FILTER,
+ "Set KM SWX sel A - PTC + MAC");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_KCC_ID) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - KCC ID");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ return -1;
+ }
+
+ swx++;
+
+ } else {
+ if (sw == 0) {
+ /* DW8 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_SEL_A, index, 0,
+ DW8_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW8 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else if (sw == 1) {
+ /* DW10 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_SEL_A, index, 0,
+ DW10_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW10 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else {
+ return -1;
+ }
+
+ sw++;
+ }
+
+ break;
+
+ case KM_USE_EXTRACTOR_QWORD:
+ if (qw == 0) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW0 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else if (qw == 1) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW4 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else {
+ return -1;
+ }
+
+ qw++;
+ break;
+
+ default:
+ return -1;
+ }
+ }
+
+ /* set mask A */
+ for (int i = 0; i < km->key_word_size; i++) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_MASK_A, index,
+ (km->be->km.nb_km_rcp_mask_a_word_size - 1) - i,
+ km->entry_mask[i]);
+ NT_LOG(DBG, FILTER, "Set KM mask A: %08x", km->entry_mask[i]);
+ }
+
+ if (km->target == KM_CAM) {
+ /* set info - Color */
+ if (km->info_set) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_INFO_A, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM info A");
+ }
+
+ /* set key length A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_EL_A, index, 0,
+ km->key_word_size + !!km->info_set - 1); /* select id is -1 */
+ /* set Flow Type for Key A */
+ NT_LOG(DBG, FILTER, "Set KM EL A: %i", km->key_word_size + !!km->info_set - 1);
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_FTM_A, index, 0, 1 << km->flow_type);
+
+ NT_LOG(DBG, FILTER, "Set KM FTM A - ft: %i", km->flow_type);
+
+ /* Set Paired - only on the CAM part though... TODO split CAM and TCAM */
+ if ((uint32_t)(km->key_word_size + !!km->info_set) >
+ km->be->km.nb_cam_record_words) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PAIRED, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM CAM Paired");
+ km->cam_paired = 1;
+ }
+
+ } else if (km->target == KM_TCAM) {
+ uint32_t bank_bm = 0;
+
+ if (tcam_find_mapping(km) < 0) {
+ /* failed mapping into TCAM */
+ NT_LOG(DBG, FILTER, "INFO: TCAM mapping flow failed");
+ return -1;
+ }
+
+ assert((uint32_t)(km->tcam_start_bank + km->key_word_size) <=
+ km->be->km.nb_tcam_banks);
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ bank_bm |=
+ (1 << (km->be->km.nb_tcam_banks - 1 - (km->tcam_start_bank + i)));
+ }
+
+ /* Set BANK_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_BANK_A, index, 0, bank_bm);
+ /* Set Kl_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_KL_A, index, 0, km->key_word_size - 1);
+
+ } else {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int cam_populate(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ km->entry_word[i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = km;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1,
+ km->entry_word[km->be->km.nb_cam_record_words + i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = km;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+
+ return res;
+}
+
+static int cam_reset_entry(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = NULL;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = NULL;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+ return res;
+}
+
+static int move_cuckoo_index(struct km_flow_def_s *km)
+{
+ assert(km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner);
+
+ for (uint32_t bank = 0; bank < km->be->km.nb_cam_banks; bank++) {
+ /* It will not select itself */
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner == NULL) {
+ if (km->cam_paired) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner != NULL)
+ continue;
+ }
+
+ /*
+ * Populate in new position
+ */
+ int res = cam_populate(km, bank);
+
+ if (res) {
+ NT_LOG(DBG, FILTER,
+ "Error: failed to write to KM CAM in cuckoo move");
+ return 0;
+ }
+
+ /*
+ * Reset/free the entry in the old bank.
+ * HW flushes are not really needed: the old addresses are always taken
+ * over by the caller. If you change this code in future updates, this
+ * may no longer be true!
+ */
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = NULL;
+
+ if (km->cam_paired)
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "KM Cuckoo hash moved from bank %i to bank %i (%04X => %04X)",
+ km->bank_used, bank, CAM_KM_DIST_IDX(km->bank_used),
+ CAM_KM_DIST_IDX(bank));
+ km->bank_used = bank;
+ (*km->cuckoo_moves)++;
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx, int levels,
+ int cam_adr_list_len)
+{
+ struct km_flow_def_s *km = km_parent->cam_dist[bank_idx].km_owner;
+
+ assert(levels <= CUCKOO_MOVE_MAX_DEPTH);
+
+ /*
+ * Only move if same pairness
+ * Can be extended later to handle both move of paired and single entries
+ */
+ if (!km || km_parent->cam_paired != km->cam_paired)
+ return 0;
+
+ if (move_cuckoo_index(km))
+ return 1;
+
+ if (levels <= 1)
+ return 0;
+
+ assert(cam_adr_list_len < CUCKOO_MOVE_MAX_DEPTH);
+
+ cam_addr_reserved_stack[cam_adr_list_len++] = bank_idx;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ int reserved = 0;
+ int new_idx = CAM_KM_DIST_IDX(i);
+
+ for (int i_reserved = 0; i_reserved < cam_adr_list_len; i_reserved++) {
+ if (cam_addr_reserved_stack[i_reserved] == new_idx) {
+ reserved = 1;
+ break;
+ }
+ }
+
+ if (reserved)
+ continue;
+
+ int res = move_cuckoo_index_level(km, new_idx, levels - 1, cam_adr_list_len);
+
+ if (res) {
+ if (move_cuckoo_index(km))
+ return 1;
+
+ assert(0);
+ }
+ }
+
+ return 0;
+}
+
+static int km_write_data_to_cam(struct km_flow_def_s *km)
+{
+ int res = 0;
+ assert(km->be->km.nb_cam_banks <= MAX_BANKS);
+ assert(km->cam_dist);
+
+ NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
+ km->record_indexes[1], km->record_indexes[2]);
+
+ if (km->info_set)
+ km->entry_word[km->key_word_size] = km->info; /* finally set info */
+
+ int bank = -1;
+
+ /*
+ * first step, see if any of the banks are free
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(i_bank)].km_owner == NULL) {
+ if (km->cam_paired == 0 ||
+ km->cam_dist[CAM_KM_DIST_IDX(i_bank) + 1].km_owner == NULL) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0) {
+ /*
+ * Second step - cuckoo move existing flows if possible
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (move_cuckoo_index_level(km, CAM_KM_DIST_IDX(i_bank), 4, 0)) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0)
+ return -1;
+
+ /* populate CAM */
+ NT_LOG(DBG, FILTER, "KM Bank = %i (addr %04X)", bank, CAM_KM_DIST_IDX(bank));
+ res = cam_populate(km, bank);
+
+ if (res == 0) {
+ km->flushed_to_target = 1;
+ km->bank_used = bank;
+ }
+
+ return res;
+}
+
+/*
+ * TCAM
+ */
+static int tcam_find_free_record(struct km_flow_def_s *km, int start_bank)
+{
+ for (uint32_t rec = 0; rec < km->be->km.nb_tcam_bank_width; rec++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank, rec)].km_owner == NULL) {
+ int pass = 1;
+
+ for (int ii = 1; ii < km->key_word_size; ii++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank + ii, rec)].km_owner !=
+ NULL) {
+ pass = 0;
+ break;
+ }
+ }
+
+ if (pass) {
+ km->tcam_record = rec;
+ return 1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int tcam_find_mapping(struct km_flow_def_s *km)
+{
+ /* Search record and start index for this flow */
+ for (int bs_idx = 0; bs_idx < km->num_start_offsets; bs_idx++) {
+ if (tcam_find_free_record(km, km->start_offsets[bs_idx])) {
+ km->tcam_start_bank = km->start_offsets[bs_idx];
+ NT_LOG(DBG, FILTER, "Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ return 0;
+ }
+ }
+
+ return -1;
+}
+
+static int tcam_write_word(struct km_flow_def_s *km, int bank, int record, uint32_t word,
+ uint32_t mask)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ uint8_t a = (uint8_t)((word >> (24 - (byte * 8))) & 0xff);
+ uint8_t a_m = (uint8_t)((mask >> (24 - (byte * 8))) & 0xff);
+ /* calculate important value bits */
+ a = a & a_m;
+
+ for (int val = 0; val < 256; val++) {
+ err |= hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if ((val & a_m) == a)
+ all_recs[rec_val] |= rec_bit;
+ else
+ all_recs[rec_val] &= ~rec_bit;
+
+ err |= hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ /* flush bank */
+ err |= hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+
+ if (err == 0) {
+ assert(km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner == NULL);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = km;
+ }
+
+ return err;
+}
+
+static int km_write_data_to_tcam(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_record < 0) {
+ tcam_find_free_record(km, km->tcam_start_bank);
+
+ if (km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER, "Reused RCP: Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ }
+
+ /* Write KM_TCI */
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record,
+ km->info);
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record,
+ km->flow_type);
+ err |= hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++) {
+ err = tcam_write_word(km, km->tcam_start_bank + i, km->tcam_record,
+ km->entry_word[i], km->entry_mask[i]);
+ }
+
+ if (err == 0)
+ km->flushed_to_target = 1;
+
+ return err;
+}
+
+static int tcam_reset_bank(struct km_flow_def_s *km, int bank, int record)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ for (int val = 0; val < 256; val++) {
+ err = hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+
+ all_recs[rec_val] &= ~rec_bit;
+ err = hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ if (err)
+ return err;
+
+ /* flush bank */
+ err = hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER, "Reset TCAM bank %i, rec_val %i rec bit %08x", bank, rec_val,
+ rec_bit);
+
+ return err;
+}
+
+static int tcam_reset_entry(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_start_bank < 0 || km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to reset TCAM entry: no bank/record allocated");
+ return -1;
+ }
+
+ /* Write KM_TCI */
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++)
+ err = tcam_reset_bank(km, km->tcam_start_bank + i, km->tcam_record);
+
+ return err;
+}
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color)
+{
+ int res = -1;
+
+ km->info = color;
+ NT_LOG(DBG, FILTER, "Write Data entry Color: %08x", color);
+
+ switch (km->target) {
+ case KM_CAM:
+ res = km_write_data_to_cam(km);
+ break;
+
+ case KM_TCAM:
+ res = km_write_data_to_tcam(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ break;
+ }
+
+ return res;
+}
+
+int km_clear_data_match_entry(struct km_flow_def_s *km)
+{
+ int res = 0;
+
+ if (km->root) {
+ struct km_flow_def_s *km1 = km->root;
+
+ while (km1->reference != km)
+ km1 = km1->reference;
+
+ km1->reference = km->reference;
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->reference) {
+ km->reference->root = NULL;
+
+ switch (km->target) {
+ case KM_CAM:
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = km->reference;
+
+ if (km->key_word_size + !!km->info_set > 1) {
+ assert(km->cam_paired);
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner =
+ km->reference;
+ }
+
+ break;
+
+ case KM_TCAM:
+ for (int i = 0; i < km->key_word_size; i++) {
+ km->tcam_dist[TCAM_DIST_IDX(km->tcam_start_bank + i,
+ km->tcam_record)]
+ .km_owner = km->reference;
+ }
+
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->flushed_to_target) {
+ switch (km->target) {
+ case KM_CAM:
+ res = cam_reset_entry(km, km->bank_used);
+ break;
+
+ case KM_TCAM:
+ res = tcam_reset_entry(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+ }
+
+ return res;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
index 532884ca01..b8a30671c3 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
@@ -165,6 +165,240 @@ int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
return be->iface->km_rcp_flush(be->be_dev, &be->km, start_idx, count);
}
+static int hw_mod_km_rcp_mod(struct flow_api_backend_s *be, enum hw_km_e field, int index,
+ int word_off, uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->km.nb_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.rcp[index], (uint8_t)*value, sizeof(struct km_v7_rcp_s));
+ break;
+
+ case HW_KM_RCP_QW0_DYN:
+ GET_SET(be->km.v7.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_b, value);
+ break;
+
+ case HW_KM_RCP_QW4_DYN:
+ GET_SET(be->km.v7.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW8_DYN:
+ GET_SET(be->km.v7.rcp[index].dw8_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW8_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw8_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW10_DYN:
+ GET_SET(be->km.v7.rcp[index].dw10_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW10_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw10_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_b, value);
+ break;
+
+ case HW_KM_RCP_SWX_CCH:
+ GET_SET(be->km.v7.rcp[index].swx_cch, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_A:
+ GET_SET(be->km.v7.rcp[index].swx_sel_a, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_B:
+ GET_SET(be->km.v7.rcp[index].swx_sel_b, value);
+ break;
+
+ case HW_KM_RCP_MASK_A:
+ if (word_off > KM_RCP_MASK_D_A_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_d_a[word_off], value);
+ break;
+
+ case HW_KM_RCP_MASK_B:
+ if (word_off > KM_RCP_MASK_B_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_b[word_off], value);
+ break;
+
+ case HW_KM_RCP_DUAL:
+ GET_SET(be->km.v7.rcp[index].dual, value);
+ break;
+
+ case HW_KM_RCP_PAIRED:
+ GET_SET(be->km.v7.rcp[index].paired, value);
+ break;
+
+ case HW_KM_RCP_EL_A:
+ GET_SET(be->km.v7.rcp[index].el_a, value);
+ break;
+
+ case HW_KM_RCP_EL_B:
+ GET_SET(be->km.v7.rcp[index].el_b, value);
+ break;
+
+ case HW_KM_RCP_INFO_A:
+ GET_SET(be->km.v7.rcp[index].info_a, value);
+ break;
+
+ case HW_KM_RCP_INFO_B:
+ GET_SET(be->km.v7.rcp[index].info_b, value);
+ break;
+
+ case HW_KM_RCP_FTM_A:
+ GET_SET(be->km.v7.rcp[index].ftm_a, value);
+ break;
+
+ case HW_KM_RCP_FTM_B:
+ GET_SET(be->km.v7.rcp[index].ftm_b, value);
+ break;
+
+ case HW_KM_RCP_BANK_A:
+ GET_SET(be->km.v7.rcp[index].bank_a, value);
+ break;
+
+ case HW_KM_RCP_BANK_B:
+ GET_SET(be->km.v7.rcp[index].bank_b, value);
+ break;
+
+ case HW_KM_RCP_KL_A:
+ GET_SET(be->km.v7.rcp[index].kl_a, value);
+ break;
+
+ case HW_KM_RCP_KL_B:
+ GET_SET(be->km.v7.rcp[index].kl_b, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_A:
+ GET_SET(be->km.v7.rcp[index].keyway_a, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_B:
+ GET_SET(be->km.v7.rcp[index].keyway_b, value);
+ break;
+
+ case HW_KM_RCP_SYNERGY_MODE:
+ GET_SET(be->km.v7.rcp[index].synergy_mode, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw0_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw0_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw2_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw2_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw4_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw4_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw5_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw5_b_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, &value, 0);
+}
+
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, value, 1);
+}
+
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -180,6 +414,103 @@ int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_cam_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_cam_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ if ((unsigned int)bank >= be->km.nb_cam_banks) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ if ((unsigned int)record >= be->km.nb_cam_records) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ unsigned int index = bank * be->km.nb_cam_records + record;
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_CAM_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.cam[index], (uint8_t)*value, sizeof(struct km_v7_cam_s));
+ break;
+
+ case HW_KM_CAM_W0:
+ GET_SET(be->km.v7.cam[index].w0, value);
+ break;
+
+ case HW_KM_CAM_W1:
+ GET_SET(be->km.v7.cam[index].w1, value);
+ break;
+
+ case HW_KM_CAM_W2:
+ GET_SET(be->km.v7.cam[index].w2, value);
+ break;
+
+ case HW_KM_CAM_W3:
+ GET_SET(be->km.v7.cam[index].w3, value);
+ break;
+
+ case HW_KM_CAM_W4:
+ GET_SET(be->km.v7.cam[index].w4, value);
+ break;
+
+ case HW_KM_CAM_W5:
+ GET_SET(be->km.v7.cam[index].w5, value);
+ break;
+
+ case HW_KM_CAM_FT0:
+ GET_SET(be->km.v7.cam[index].ft0, value);
+ break;
+
+ case HW_KM_CAM_FT1:
+ GET_SET(be->km.v7.cam[index].ft1, value);
+ break;
+
+ case HW_KM_CAM_FT2:
+ GET_SET(be->km.v7.cam[index].ft2, value);
+ break;
+
+ case HW_KM_CAM_FT3:
+ GET_SET(be->km.v7.cam[index].ft3, value);
+ break;
+
+ case HW_KM_CAM_FT4:
+ GET_SET(be->km.v7.cam[index].ft4, value);
+ break;
+
+ case HW_KM_CAM_FT5:
+ GET_SET(be->km.v7.cam[index].ft5, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_cam_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count)
{
if (count == ALL_ENTRIES)
@@ -273,6 +604,12 @@ int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int ba
return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 0);
}
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set)
+{
+ return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 1);
+}
+
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -288,6 +625,49 @@ int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_tci_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_tci_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ unsigned int index = bank * be->km.nb_tcam_bank_width + record;
+
+ if (index >= (be->km.nb_tcam_banks * be->km.nb_tcam_bank_width)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_TCI_COLOR:
+ GET_SET(be->km.v7.tci[index].color, value);
+ break;
+
+ case HW_KM_TCI_FT:
+ GET_SET(be->km.v7.tci[index].ft, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_tci_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5572662647..4737460cdf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -40,7 +40,19 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_km_rcp {
+ struct hw_db_inline_km_rcp_data data;
+ int ref;
+
+ struct hw_db_inline_resource_db_km_ft {
+ struct hw_db_inline_km_ft_data data;
+ int ref;
+ } *ft;
+ } *km;
+
uint32_t nb_cat;
+ uint32_t nb_km_ft;
+ uint32_t nb_km_rcp;
/* Hardware */
@@ -91,6 +103,25 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_km_ft = ndev->be.cat.nb_flow_types;
+ db->nb_km_rcp = ndev->be.km.nb_categories;
+ db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
+
+ if (db->km == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ db->km[i].ft = calloc(db->nb_km_ft * db->nb_cat,
+ sizeof(struct hw_db_inline_resource_db_km_ft));
+
+ if (db->km[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
*db_handle = db;
return 0;
}
@@ -104,6 +135,13 @@ void hw_db_inline_destroy(void *db_handle)
free(db->slc_lr);
free(db->cat);
+ if (db->km) {
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
+ free(db->km[i].ft);
+
+ free(db->km);
+ }
+
free(db->cfn);
free(db);
@@ -134,12 +172,61 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_KM_RCP:
+ hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
+ break;
+
default:
break;
}
}
}
+
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ for (uint32_t i = 0; i < size; ++i) {
+ if (idxs[i].type != type)
+ continue;
+
+ switch (type) {
+ case HW_DB_IDX_TYPE_NONE:
+ return NULL;
+
+ case HW_DB_IDX_TYPE_CAT:
+ return &db->cat[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_QSL:
+ return &db->qsl[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_COT:
+ return &db->cot[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_SLC_LR:
+ return &db->slc_lr[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_KM_RCP:
+ return &db->km[idxs[i].id1].data;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
+ default:
+ return NULL;
+ }
+ }
+
+ return NULL;
+}
+
/******************************************************************************/
/* Filter */
/******************************************************************************/
@@ -614,3 +701,150 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->cat[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* KM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_km_compare(const struct hw_db_inline_km_rcp_data *data1,
+ const struct hw_db_inline_km_rcp_data *data2)
+{
+ return data1->rcp == data2->rcp;
+}
+
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_km_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_RCP;
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ if (!found && db->km[i].ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (db->km[i].ref > 0 && hw_db_inline_km_compare(data, &db->km[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->km[idx.id1].data, data, sizeof(struct hw_db_inline_km_rcp_data));
+ db->km[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->km[idx.id1].ref += 1;
+}
+
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ (void)db_handle;
+
+ if (idx.error)
+ return;
+}
+
+/******************************************************************************/
+/* KM FT */
+/******************************************************************************/
+
+static int hw_db_inline_km_ft_compare(const struct hw_db_inline_km_ft_data *data1,
+ const struct hw_db_inline_km_ft_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[data->km.id1];
+ struct hw_db_km_ft idx = { .raw = 0 };
+ uint32_t cat_offset = data->cat.ids * db->nb_cat;
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_FT;
+ idx.id2 = data->km.id1;
+ idx.id3 = data->cat.ids;
+
+ if (km_rcp->data.rcp == 0) {
+ idx.id1 = 0;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_km_ft; ++i) {
+ const struct hw_db_inline_resource_db_km_ft *km_ft = &km_rcp->ft[cat_offset + i];
+
+ if (!found && km_ft->ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (km_ft->ref > 0 && hw_db_inline_km_ft_compare(data, &km_ft->data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&km_rcp->ft[cat_offset + idx.id1].data, data,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error) {
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+ db->km[idx.id2].ft[cat_offset + idx.id1].ref += 1;
+ }
+}
+
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[idx.id2];
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+
+ if (idx.error)
+ return;
+
+ km_rcp->ft[cat_offset + idx.id1].ref -= 1;
+
+ if (km_rcp->ft[cat_offset + idx.id1].ref <= 0) {
+ memset(&km_rcp->ft[cat_offset + idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index d0435acaef..e104ba7327 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_action_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cot_idx {
HW_DB_IDX;
};
@@ -48,12 +52,22 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_km_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_km_ft {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_KM_FT,
};
/* Functionality data types */
@@ -123,6 +137,16 @@ struct hw_db_inline_action_set_data {
};
};
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -130,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle);
void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
uint32_t size);
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
@@ -158,6 +184,18 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
/**/
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data);
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data);
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+
+/**/
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6d72f8d99b..beb7fe2cb3 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2335,6 +2335,23 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ const bool empty_pattern = fd_has_empty_pattern(fd);
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = empty_pattern ? 0x0 : 0x4, /* FT key C */
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle, &cot_data);
+ local_idxs[(*local_idx_counter)++] = cot_idx.raw;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Finalize QSL */
struct hw_db_qsl_idx qsl_idx =
hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
@@ -2429,6 +2446,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ int identical_km_entry_ft = -1;
+
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -2503,6 +2522,130 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ /* Setup KM RCP */
+ struct hw_db_inline_km_rcp_data km_rcp_data = { .rcp = 0 };
+
+ if (fd->km.num_ftype_elem) {
+ struct flow_handle *flow = dev->ndev->flow_base, *found_flow = NULL;
+
+ if (km_key_create(&fd->km, fh->port_id)) {
+ NT_LOG(ERR, FILTER, "KM creation failed");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.be = &dev->ndev->be;
+
+ /* Look for existing KM RCPs */
+ while (flow) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLOW &&
+ flow->fd->km.flow_type) {
+ int res = km_key_compare(&fd->km, &flow->fd->km);
+
+ if (res < 0) {
+ /* Flow RCP and match data are identical */
+ identical_km_entry_ft = flow->fd->km.flow_type;
+ found_flow = flow;
+ break;
+ }
+
+ if (res > 0) {
+ /* Flow RCP found, but match data differs */
+ found_flow = flow;
+ }
+ }
+
+ flow = flow->next;
+ }
+
+ km_attach_ndev_resource_management(&fd->km, &dev->ndev->km_res_handle);
+
+ if (found_flow != NULL) {
+ /* Reuse existing KM RCP */
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)
+ found_flow->flm_db_idxs,
+ found_flow->flm_db_idx_counter);
+
+ if (other_km_rcp_data == NULL ||
+ flow_nic_ref_resource(dev->ndev, RES_KM_CATEGORY,
+ other_km_rcp_data->rcp)) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference existing KM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_data.rcp = other_km_rcp_data->rcp;
+ } else {
+ /* Alloc new KM RCP */
+ int rcp = flow_nic_alloc_resource(dev->ndev, RES_KM_CATEGORY, 1);
+
+ if (rcp < 0) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference KM RCP resource (flow_nic_alloc)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_set(&fd->km, rcp);
+ km_rcp_data.rcp = (uint32_t)rcp;
+ }
+ }
+
+ struct hw_db_km_idx km_idx =
+ hw_db_inline_km_add(dev->ndev, dev->ndev->hw_db_handle, &km_rcp_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = km_idx.raw;
+
+ if (km_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM RCP resource (db_inline)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Setup KM FT */
+ struct hw_db_inline_km_ft_data km_ft_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ };
+ struct hw_db_km_ft km_ft_idx =
+ hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = km_ft_idx.raw;
+
+ if (km_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Finalize KM RCP */
+ if (fd->km.num_ftype_elem) {
+ if (identical_km_entry_ft >= 0 && identical_km_entry_ft != km_ft_idx.id1) {
+ NT_LOG(ERR, FILTER,
+ "Identical KM matches cannot have different KM FTs");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.flow_type = km_ft_idx.id1;
+
+ if (fd->km.target == KM_CAM) {
+ uint32_t ft_a_mask = 0;
+ hw_mod_km_rcp_get(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0, &ft_a_mask);
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0,
+ ft_a_mask | (1 << fd->km.flow_type));
+ }
+
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)km_rcp_data.rcp, 1);
+
+ km_write_data_match_entry(&fd->km, 0);
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2783,6 +2926,25 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
} else {
NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->fd->km.num_ftype_elem) {
+ km_clear_data_match_entry(&fh->fd->km);
+
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ if (other_km_rcp_data != NULL &&
+ flow_nic_deref_resource(dev->ndev, RES_KM_CATEGORY,
+ (int)other_km_rcp_data->rcp) == 0) {
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_PRESET_ALL,
+ (int)other_km_rcp_data->rcp, 0, 0);
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)other_km_rcp_data->rcp,
+ 1);
+ }
+ }
+
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
(struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
free(fh->fd);
--
2.45.0
* [PATCH v2 31/73] net/ntnic: add hash API
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (29 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 30/73] net/ntnic: add KM module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 32/73] net/ntnic: add TPE module Serhii Iliushyk
` (42 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Hasher module calculates a configurable hash value
to be used internally by the FPGA.
The module supports both Toeplitz and NT-hash.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 40 +
drivers/net/ntnic/include/flow_api_engine.h | 17 +
drivers/net/ntnic/include/hw_mod_backend.h | 20 +
.../ntnic/include/stream_binary_flow_api.h | 25 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 212 +++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 ++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 25 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 ++++
.../profile_inline/flow_api_hw_db_inline.c | 142 +++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 850 +++++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 4 +
drivers/net/ntnic/ntnic_mod_reg.h | 4 +
15 files changed, 1706 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
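As a reference point for reviewers, the Toeplitz algorithm mentioned in the commit message can be sketched in software as below. This is an illustrative sketch only, not the driver's FPGA implementation; the function name and byte layout are hypothetical, and the key/tuple values are the published RSS verification vectors.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Reference Toeplitz hash: for every input bit that is 1, XOR into the
 * result the 32-bit window of the key that starts at that bit position.
 * The key must be at least len + 4 bytes long (40 bytes for RSS).
 */
static uint32_t toeplitz_hash(const uint8_t *key, const uint8_t *data, size_t len)
{
	uint32_t result = 0;
	/* Seed the sliding window with the first 32 key bits. */
	uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
		((uint32_t)key[2] << 8) | (uint32_t)key[3];

	for (size_t i = 0; i < len; i++) {
		for (int bit = 7; bit >= 0; bit--) {
			if (data[i] & (1u << bit))
				result ^= window;
			/* Slide the window left, pulling in the next key bit. */
			window = (window << 1) | ((key[i + 4] >> bit) & 1u);
		}
	}
	return result;
}

/* Published RSS verification key and an IPv4/TCP test tuple:
 * src 66.9.149.187:2794 -> dst 161.142.100.80:1766. */
static const uint8_t rss_key[40] = {
	0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
	0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
	0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
	0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
	0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
};
static const uint8_t tcp4_tuple[12] = {
	66, 9, 149, 187,	/* source IPv4 address */
	161, 142, 100, 80,	/* destination IPv4 address */
	0x0a, 0xea,		/* source port 2794, network order */
	0x06, 0xe6,		/* destination port 1766, network order */
};
```

Hashing the 12-byte tuple above with the verification key reproduces the documented IPv4/TCP test value.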
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index edffd0a57a..2e96fa5bed 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -29,6 +29,37 @@ struct hw_mod_resource_s {
*/
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev);
+/**
+ * A structure used to configure the Receive Side Scaling (RSS) feature
+ * of an Ethernet port.
+ */
+struct nt_eth_rss_conf {
+ /**
+ * In rte_eth_dev_rss_hash_conf_get(), the *rss_key_len* should be
+ * greater than or equal to the *hash_key_size* which get from
+ * rte_eth_dev_info_get() API. And the *rss_key* should contain at least
+ * *hash_key_size* bytes. If not meet these requirements, the query
+ * result is unreliable even if the operation returns success.
+ *
+ * In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(), if
+ * *rss_key* is not NULL, the *rss_key_len* indicates the length of the
+ * *rss_key* in bytes and it should be equal to *hash_key_size*.
+ * If *rss_key* is NULL, drivers are free to use a random or a default key.
+ */
+ uint8_t rss_key[MAX_RSS_KEY_LEN];
+ /**
+ * Indicates the type of packets or the specific part of packets to
+ * which RSS hashing is to be applied.
+ */
+ uint64_t rss_hf;
+ /**
+ * Hash algorithm.
+ */
+ enum rte_eth_hash_function algorithm;
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask);
+
struct flow_eth_dev {
/* NIC that owns this port device */
struct flow_nic_dev *ndev;
@@ -49,6 +80,11 @@ struct flow_eth_dev {
struct flow_eth_dev *next;
};
+enum flow_nic_hash_e {
+ HASH_ALGO_ROUND_ROBIN = 0,
+ HASH_ALGO_5TUPLE,
+};
+
/* registered NIC backends */
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
@@ -191,4 +227,8 @@ void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm);
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index a0f02f4e8a..e52363f04e 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,7 @@ struct km_flow_def_s {
int bank_used;
uint32_t *cuckoo_moves; /* for CAM statistics only */
struct cam_distrib_s *cam_dist;
+ struct hasher_s *hsh;
/* TCAM specific bank management */
struct tcam_distrib_s *tcam_dist;
@@ -136,6 +137,17 @@ struct km_flow_def_s {
int tcam_record;
};
+/*
+ * RSS configuration, see struct rte_flow_action_rss
+ */
+struct hsh_def_s {
+ enum rte_eth_hash_function func; /* RSS hash function to apply */
+ /* RSS hash types, see definition of RTE_ETH_RSS_* for hash calculation options */
+ uint64_t types;
+ uint32_t key_len; /* Hash key length in bytes. */
+ const uint8_t *key; /* Hash key. */
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -247,6 +259,11 @@ struct nic_flow_def {
* Key Matcher flow definitions
*/
struct km_flow_def_s km;
+
+ /*
+ * Hash module RSS definitions
+ */
+ struct hsh_def_s hsh;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 26903f2183..cee148807a 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -149,14 +149,27 @@ enum km_flm_if_select_e {
int debug
enum frame_offs_e {
+ DYN_SOF = 0,
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
+ DYN_MPLS = 3,
DYN_L3 = 4,
+ DYN_ID_IPV4_6 = 5,
+ DYN_FINAL_IP_DST = 6,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
+ DYN_TUN_PAYLOAD = 9,
+ DYN_TUN_L2 = 10,
+ DYN_TUN_VLAN = 11,
+ DYN_TUN_MPLS = 12,
DYN_TUN_L3 = 13,
+ DYN_TUN_ID_IPV4_6 = 14,
+ DYN_TUN_FINAL_IP_DST = 15,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ DYN_EOF = 18,
+ DYN_L3_PAYLOAD_END = 19,
+ DYN_TUN_L3_PAYLOAD_END = 20,
SB_VNI = SWX_INFO | 1,
SB_MAC_PORT = SWX_INFO | 2,
SB_KCC_ID = SWX_INFO | 3
@@ -227,6 +240,11 @@ enum {
};
+enum {
+ HASH_HASH_NONE = 0,
+ HASH_5TUPLE = 8,
+};
+
enum {
CPY_SELECT_DSCP_IPV4 = 0,
CPY_SELECT_DSCP_IPV6 = 1,
@@ -670,6 +688,8 @@ int hw_mod_hsh_alloc(struct flow_api_backend_s *be);
void hw_mod_hsh_free(struct flow_api_backend_s *be);
int hw_mod_hsh_reset(struct flow_api_backend_s *be);
int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value);
struct qsl_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 8097518d61..e5fe686d99 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,6 +12,31 @@
/* Max RSS hash key length in bytes */
#define MAX_RSS_KEY_LEN 40
+/* NT specific MASKs for RSS configuration */
+/* NOTE: Masks are required for correct RSS configuration, do not modify them! */
+#define NT_ETH_RSS_IPV4_MASK \
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+
+#define NT_ETH_RSS_IPV6_MASK \
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NT_ETH_RSS_IP_MASK \
+ (NT_ETH_RSS_IPV4_MASK | NT_ETH_RSS_IPV6_MASK | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
+
+/* List of all RSS flags supported for RSS calculation offload */
+#define NT_ETH_RSS_OFFLOAD_MASK \
+ (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_IPV4_CHKSUM | RTE_ETH_RSS_L4_CHKSUM | RTE_ETH_RSS_PORT | RTE_ETH_RSS_GTPU)
+
/*
* Flow frontend for binary programming interface
*/
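A short illustration of how an offload mask like NT_ETH_RSS_OFFLOAD_MASK is typically used: the PMD accepts an rss_hf value only if every requested bit is covered by the advertised mask. The bit positions below match the RTE_BIT64 values documented later in this patch; the helper name is illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of RTE_ETH_RSS_* bits (real RTE_BIT64 positions). */
#define RSS_IPV4             (1ULL << 2)
#define RSS_FRAG_IPV4        (1ULL << 3)
#define RSS_NONFRAG_IPV4_TCP (1ULL << 4)
#define OFFLOAD_MASK (RSS_IPV4 | RSS_FRAG_IPV4 | RSS_NONFRAG_IPV4_TCP)

/* Returns non-zero when every requested bit is covered by the
 * advertised offload mask - the containment test done before
 * accepting an rss_hf value. */
static int rss_hf_supported(uint64_t requested, uint64_t mask)
{
	return (requested & ~mask) == 0;
}
```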
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e1fef37ccb..d7e6d05556 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -56,6 +56,7 @@ sources = files(
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
+ 'nthw/flow_api/flow_hasher.c',
'nthw/flow_api/flow_kcc.c',
'nthw/flow_api/flow_km.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index a51d621ef9..043e4244fc 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -2,6 +2,8 @@
* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2023 Napatech A/S
*/
+#include "ntlog.h"
+#include "nt_util.h"
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
@@ -12,6 +14,11 @@
#define SCATTER_GATHER
+#define RSS_TO_STRING(name) \
+ { \
+ name, #name \
+ }
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -807,6 +814,211 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
return ndev->be.be_dev;
}
+/* Information for a given RSS type. */
+struct rss_type_info {
+ uint64_t rss_type;
+ const char *str;
+};
+
+static struct rss_type_info rss_to_string[] = {
+ /* RTE_BIT64(2) IPv4 dst + IPv4 src */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4),
+ /* RTE_BIT64(3) IPv4 dst + IPv4 src + Identification of group of fragments */
+ RSS_TO_STRING(RTE_ETH_RSS_FRAG_IPV4),
+ /* RTE_BIT64(4) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_TCP),
+ /* RTE_BIT64(5) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_UDP),
+ /* RTE_BIT64(6) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_SCTP),
+ /* RTE_BIT64(7) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_OTHER),
+ /*
+ * RTE_BIT64(14) 128-bits of L2 payload starting after src MAC, i.e. including optional
+ * VLAN tag and ethertype. Overrides all L3 and L4 flags at the same level, but inner
+ * L2 payload can be combined with outer S-VLAN and GTPU TEID flags.
+ */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_PAYLOAD),
+ /* RTE_BIT64(18) L4 dst + L4 src + L4 protocol - see comment of RTE_ETH_RSS_L4_CHKSUM */
+ RSS_TO_STRING(RTE_ETH_RSS_PORT),
+ /* RTE_BIT64(19) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_VXLAN),
+ /* RTE_BIT64(20) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_GENEVE),
+ /* RTE_BIT64(21) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_NVGRE),
+ /* RTE_BIT64(23) GTP TEID - always from outer GTPU header */
+ RSS_TO_STRING(RTE_ETH_RSS_GTPU),
+ /* RTE_BIT64(24) MAC dst + MAC src */
+ RSS_TO_STRING(RTE_ETH_RSS_ETH),
+ /* RTE_BIT64(25) outermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_S_VLAN),
+ /* RTE_BIT64(26) innermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_C_VLAN),
+ /* RTE_BIT64(27) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ESP),
+ /* RTE_BIT64(28) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_AH),
+ /* RTE_BIT64(29) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV3),
+ /* RTE_BIT64(30) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PFCP),
+ /* RTE_BIT64(31) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PPPOE),
+ /* RTE_BIT64(32) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ECPRI),
+ /* RTE_BIT64(33) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_MPLS),
+ /* RTE_BIT64(34) IPv4 Header checksum + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4_CHKSUM),
+
+ /*
+ * if combined with RTE_ETH_RSS_NONFRAG_IPV4_[TCP|UDP|SCTP] then
+ * L4 protocol + chosen protocol header Checksum
+ * else
+ * error
+ */
+ /* RTE_BIT64(35) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_CHKSUM),
+#ifndef ANDROMEDA_DPDK_21_11
+ /* RTE_BIT64(36) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV2),
+#endif
+
+ { RTE_BIT64(37), "unknown_RTE_BIT64(37)" },
+ { RTE_BIT64(38), "unknown_RTE_BIT64(38)" },
+ { RTE_BIT64(39), "unknown_RTE_BIT64(39)" },
+ { RTE_BIT64(40), "unknown_RTE_BIT64(40)" },
+ { RTE_BIT64(41), "unknown_RTE_BIT64(41)" },
+ { RTE_BIT64(42), "unknown_RTE_BIT64(42)" },
+ { RTE_BIT64(43), "unknown_RTE_BIT64(43)" },
+ { RTE_BIT64(44), "unknown_RTE_BIT64(44)" },
+ { RTE_BIT64(45), "unknown_RTE_BIT64(45)" },
+ { RTE_BIT64(46), "unknown_RTE_BIT64(46)" },
+ { RTE_BIT64(47), "unknown_RTE_BIT64(47)" },
+ { RTE_BIT64(48), "unknown_RTE_BIT64(48)" },
+ { RTE_BIT64(49), "unknown_RTE_BIT64(49)" },
+
+ /* RTE_BIT64(50) outermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_OUTERMOST),
+ /* RTE_BIT64(51) innermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_INNERMOST),
+
+ /* RTE_BIT64(52) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE96),
+ /* RTE_BIT64(53) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE64),
+ /* RTE_BIT64(54) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE56),
+ /* RTE_BIT64(55) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE48),
+ /* RTE_BIT64(56) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE40),
+ /* RTE_BIT64(57) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE32),
+
+ /* RTE_BIT64(58) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_DST_ONLY),
+ /* RTE_BIT64(59) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_SRC_ONLY),
+ /* RTE_BIT64(60) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_DST_ONLY),
+ /* RTE_BIT64(61) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_SRC_ONLY),
+ /* RTE_BIT64(62) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_DST_ONLY),
+ /* RTE_BIT64(63) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_SRC_ONLY),
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask)
+{
+ if (str == NULL || str_len == 0)
+ return -1;
+
+ memset(str, 0x0, str_len);
+ uint16_t str_end = 0;
+ const struct rss_type_info *start = rss_to_string;
+
+ for (const struct rss_type_info *p = start; p != start + ARRAY_SIZE(rss_to_string); ++p) {
+ if (p->rss_type & hash_mask) {
+ if (strlen(prefix) + strlen(p->str) < (size_t)(str_len - str_end)) {
+ snprintf(str + str_end, str_len - str_end, "%s", prefix);
+ str_end += strlen(prefix);
+ snprintf(str + str_end, str_len - str_end, "%s", p->str);
+ str_end += strlen(p->str);
+
+ } else {
+ return -1;
+ }
+ }
+ }
+
+ return 0;
+}
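The bounded-append pattern used by sprint_nt_rss_mask() can be sketched standalone: append "<prefix><name>" for every set bit, and refuse (return -1) rather than truncate when the buffer is too small. The table and helper name here are simplified stand-ins for the driver's rss_to_string table:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal stand-in for the flag->name table. */
struct flag_name { uint64_t bit; const char *str; };

/* Same logic as sprint_nt_rss_mask(): -1 on NULL/too-small buffer,
 * 0 on success with all matching names appended. */
static int sprint_mask(char *str, uint16_t str_len, const char *prefix,
		       uint64_t mask, const struct flag_name *tab, size_t n)
{
	if (str == NULL || str_len == 0)
		return -1;
	memset(str, 0, str_len);
	uint16_t end = 0;
	for (size_t i = 0; i < n; i++) {
		if (!(tab[i].bit & mask))
			continue;
		size_t need = strlen(prefix) + strlen(tab[i].str);
		if (need >= (size_t)(str_len - end))
			return -1;	/* would not fit with NUL terminator */
		end += (uint16_t)snprintf(str + end, str_len - end, "%s%s",
					  prefix, tab[i].str);
	}
	return 0;
}
```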
+
+/*
+ * Hash
+ */
+
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm)
+{
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ switch (algorithm) {
+ case HASH_ALGO_5TUPLE:
+ /* create an IPv6 hashing recipe and enable the adaptive IPv4 mask bit */
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_OFS, hsh_idx, 0, -16);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_PE, hsh_idx, 0, DYN_L4);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_PE, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_P, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 1, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 2, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 3, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 4, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 5, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 6, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 7, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 8, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 9, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_VALID, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_TYPE, hsh_idx, 0, HASH_5TUPLE);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0, 1);
+
+ NT_LOG(DBG, FILTER, "Set IPv6 5-tuple hasher with adaptive IPv4 hashing");
+ break;
+
+ default:
+ case HASH_ALGO_ROUND_ROBIN:
+ /* zero is round-robin */
+ break;
+ }
+
+ return 0;
+}
+
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.c b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
new file mode 100644
index 0000000000..86dfc16e79
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
@@ -0,0 +1,156 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <math.h>
+
+#include "flow_hasher.h"
+
+static uint32_t shuffle(uint32_t x)
+{
+ return ((x & 0x00000002) << 29) | ((x & 0xAAAAAAA8) >> 3) | ((x & 0x15555555) << 3) |
+ ((x & 0x40000000) >> 29);
+}
+
+static uint32_t ror_inv(uint32_t x, const int s)
+{
+ return (x >> s) | ((~x) << (32 - s));
+}
+
+static uint32_t combine(uint32_t x, uint32_t y)
+{
+ uint32_t x1 = ror_inv(x, 15);
+ uint32_t x2 = ror_inv(x, 13);
+ uint32_t y1 = ror_inv(y, 3);
+ uint32_t y2 = ror_inv(y, 27);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint32_t mix(uint32_t x, uint32_t y)
+{
+ return shuffle(combine(x, y));
+}
+
+static uint64_t ror_inv3(uint64_t x)
+{
+ const uint64_t m = 0xE0000000E0000000ULL;
+
+ return ((x >> 3) | m) ^ ((x << 29) & m);
+}
+
+static uint64_t ror_inv13(uint64_t x)
+{
+ const uint64_t m = 0xFFF80000FFF80000ULL;
+
+ return ((x >> 13) | m) ^ ((x << 19) & m);
+}
+
+static uint64_t ror_inv15(uint64_t x)
+{
+ const uint64_t m = 0xFFFE0000FFFE0000ULL;
+
+ return ((x >> 15) | m) ^ ((x << 17) & m);
+}
+
+static uint64_t ror_inv27(uint64_t x)
+{
+ const uint64_t m = 0xFFFFFFE0FFFFFFE0ULL;
+
+ return ((x >> 27) | m) ^ ((x << 5) & m);
+}
+
+static uint64_t shuffle64(uint64_t x)
+{
+ return ((x & 0x0000000200000002) << 29) | ((x & 0xAAAAAAA8AAAAAAA8) >> 3) |
+ ((x & 0x1555555515555555) << 3) | ((x & 0x4000000040000000) >> 29);
+}
+
+static uint64_t pair(uint32_t x, uint32_t y)
+{
+ return ((uint64_t)x << 32) | y;
+}
+
+static uint64_t combine64(uint64_t x, uint64_t y)
+{
+ uint64_t x1 = ror_inv15(x);
+ uint64_t x2 = ror_inv13(x);
+ uint64_t y1 = ror_inv3(y);
+ uint64_t y2 = ror_inv27(y);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint64_t mix64(uint64_t x, uint64_t y)
+{
+ return shuffle64(combine64(x, y));
+}
+
+static uint32_t calc16(const uint32_t key[16])
+{
+ /*
+ * 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Layer 0
+ * \./ \./ \./ \./ \./ \./ \./ \./
+ * 0 1 2 3 4 5 6 7 Layer 1
+ * \__.__/ \__.__/ \__.__/ \__.__/
+ * 0 1 2 3 Layer 2
+ * \______.______/ \______.______/
+ * 0 1 Layer 3
+ * \______________.______________/
+ * 0 Layer 4
+ * / \
+ * \./
+ * 0 Layer 5
+ * / \
+ * \./ Layer 6
+ * value
+ */
+
+ uint64_t z;
+ uint32_t x;
+
+ z = mix64(mix64(mix64(pair(key[0], key[8]), pair(key[1], key[9])),
+ mix64(pair(key[2], key[10]), pair(key[3], key[11]))),
+ mix64(mix64(pair(key[4], key[12]), pair(key[5], key[13])),
+ mix64(pair(key[6], key[14]), pair(key[7], key[15]))));
+
+ x = mix((uint32_t)(z >> 32), (uint32_t)z);
+ x = mix(x, ror_inv(x, 17));
+ x = combine(x, ror_inv(x, 17));
+
+ return x;
+}
+
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result)
+{
+ uint64_t val;
+ uint32_t res;
+
+ val = calc16(key);
+ res = (uint32_t)val;
+
+ if (hsh->cam_bw > 32)
+ val = (val << (hsh->cam_bw - 32)) ^ val;
+
+ for (int i = 0; i < hsh->banks; i++) {
+ result[i] = (unsigned int)(val & hsh->cam_records_bw_mask);
+ val = val >> hsh->cam_records_bw;
+ }
+
+ return res;
+}
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records)
+{
+ hsh->banks = banks;
+ hsh->cam_records_bw = (int)(log2(nb_records - 1) + 1);
+ hsh->cam_records_bw_mask = (1U << hsh->cam_records_bw) - 1;
+ hsh->cam_bw = hsh->banks * hsh->cam_records_bw;
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.h b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
new file mode 100644
index 0000000000..15de8e9933
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_HASHER_H_
+#define _FLOW_HASHER_H_
+
+#include <stdint.h>
+
+struct hasher_s {
+ int banks;
+ int cam_records_bw;
+ uint32_t cam_records_bw_mask;
+ int cam_bw;
+};
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records);
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result);
+
+#endif /* _FLOW_HASHER_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 30d6ea728e..f79919cb81 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -9,6 +9,7 @@
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
#include "nt_util.h"
+#include "flow_hasher.h"
#define MAX_QWORDS 2
#define MAX_SWORDS 2
@@ -75,10 +76,25 @@ static int tcam_find_mapping(struct km_flow_def_s *km);
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
{
+ /*
+ * KM entries occupied in CAM - to manage the cuckoo shuffling
+ * and track CAM population and usage
+ * KM entries occupied in TCAM - to manage population and usage
+ */
+ if (!*handle) {
+ *handle = calloc(1,
+ (size_t)CAM_ENTRIES + sizeof(uint32_t) + (size_t)TCAM_ENTRIES +
+ sizeof(struct hasher_s));
+ NT_LOG(DBG, FILTER, "Allocate NIC DEV CAM and TCAM record manager");
+ }
+
km->cam_dist = (struct cam_distrib_s *)*handle;
km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
km->tcam_dist =
(struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+
+ km->hsh = (struct hasher_s *)((char *)km->tcam_dist + TCAM_ENTRIES);
+ init_hasher(km->hsh, km->be->km.nb_cam_banks, km->be->km.nb_cam_records);
}
void km_free_ndev_resource_management(void **handle)
@@ -839,9 +855,18 @@ static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx
static int km_write_data_to_cam(struct km_flow_def_s *km)
{
int res = 0;
+ int val[MAX_BANKS];
assert(km->be->km.nb_cam_banks <= MAX_BANKS);
assert(km->cam_dist);
+ /* word list without info set */
+ gethash(km->hsh, km->entry_word, val);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ /* if paired, we always start at an even address - clear bit 0 */
+ km->record_indexes[i] = (km->cam_paired) ? val[i] & ~1 : val[i];
+ }
+
NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
km->record_indexes[1], km->record_indexes[2]);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
index df5c00ac42..1750d09afb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
@@ -89,3 +89,182 @@ int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->hsh_rcp_flush(be->be_dev, &be->hsh, start_idx, count);
}
+
+static int hw_mod_hsh_rcp_mod(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t *value, int get)
+{
+ if (index >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 5:
+ switch (field) {
+ case HW_HSH_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->hsh.v5.rcp[index], (uint8_t)*value,
+ sizeof(struct hsh_v5_rcp_s));
+ break;
+
+ case HW_HSH_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off);
+ break;
+
+ case HW_HSH_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off,
+ be->hsh.nb_rcp);
+ break;
+
+ case HW_HSH_RCP_LOAD_DIST_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].load_dist_type, value);
+ break;
+
+ case HW_HSH_RCP_MAC_PORT_MASK:
+ if (word_off > HSH_RCP_MAC_PORT_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].mac_port_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SORT:
+ GET_SET(be->hsh.v5.rcp[index].sort, value);
+ break;
+
+ case HW_HSH_RCP_QW0_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw0_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_HSH_RCP_QW4_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw4_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_PE:
+ GET_SET(be->hsh.v5.rcp[index].w8_pe, value);
+ break;
+
+ case HW_HSH_RCP_W8_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w8_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w8_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_PE:
+ GET_SET(be->hsh.v5.rcp[index].w9_pe, value);
+ break;
+
+ case HW_HSH_RCP_W9_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w9_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W9_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w9_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_P:
+ GET_SET(be->hsh.v5.rcp[index].w9_p, value);
+ break;
+
+ case HW_HSH_RCP_P_MASK:
+ GET_SET(be->hsh.v5.rcp[index].p_mask, value);
+ break;
+
+ case HW_HSH_RCP_WORD_MASK:
+ if (word_off > HSH_RCP_WORD_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].word_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SEED:
+ GET_SET(be->hsh.v5.rcp[index].seed, value);
+ break;
+
+ case HW_HSH_RCP_TNL_P:
+ GET_SET(be->hsh.v5.rcp[index].tnl_p, value);
+ break;
+
+ case HW_HSH_RCP_HSH_VALID:
+ GET_SET(be->hsh.v5.rcp[index].hsh_valid, value);
+ break;
+
+ case HW_HSH_RCP_HSH_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].hsh_type, value);
+ break;
+
+ case HW_HSH_RCP_TOEPLITZ:
+ GET_SET(be->hsh.v5.rcp[index].toeplitz, value);
+ break;
+
+ case HW_HSH_RCP_K:
+ if (word_off > HSH_RCP_KEY_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].k[word_off], value);
+ break;
+
+ case HW_HSH_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->hsh.v5.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 5 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value)
+{
+ return hw_mod_hsh_rcp_mod(be, field, index, word_off, &value, 0);
+}
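The single mod-function with a get flag wrapped by hw_mod_hsh_rcp_set() is a recurring backend idiom in this driver; a minimal standalone sketch of the pattern (the GET_SET macro and rcp fields here are simplified stand-ins, not the driver's definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified GET_SET idiom: one accessor serves both directions,
 * selected by the 'get' flag. */
#define GET_SET(field, value, get) \
	do { \
		if (get) \
			*(value) = (field); \
		else \
			(field) = *(value); \
	} while (0)

struct rcp { uint32_t seed; uint32_t hsh_type; };

static int rcp_mod(struct rcp *r, int field, uint32_t *value, int get)
{
	switch (field) {
	case 0: GET_SET(r->seed, value, get); break;
	case 1: GET_SET(r->hsh_type, value, get); break;
	default: return -1;	/* unsupported field */
	}
	return 0;
}

/* Thin setter wrapper, mirroring hw_mod_hsh_rcp_set(). */
static int rcp_set(struct rcp *r, int field, uint32_t value)
{
	return rcp_mod(r, field, &value, 0);
}
```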
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4737460cdf..068c890b45 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,9 +30,15 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_hsh {
+ struct hw_db_inline_hsh_data data;
+ int ref;
+ } *hsh;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_hsh;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -122,6 +128,21 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
}
+ db->cfn = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cfn));
+
+ if (db->cfn == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_hsh = ndev->be.hsh.nb_rcp;
+ db->hsh = calloc(db->nb_hsh, sizeof(struct hw_db_inline_resource_db_hsh));
+
+ if (db->hsh == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -133,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->hsh);
+
free(db->cat);
if (db->km) {
@@ -180,6 +203,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_HSH:
+ hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -219,6 +246,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_KM_FT:
return NULL; /* FTs can't be easily looked up */
+ case HW_DB_IDX_TYPE_HSH:
+ return &db->hsh[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -247,6 +277,7 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
{
(void)ft;
(void)qsl_hw_id;
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
@@ -848,3 +879,114 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* HSH */
+/******************************************************************************/
+
+static int hw_db_inline_hsh_compare(const struct hw_db_inline_hsh_data *data1,
+ const struct hw_db_inline_hsh_data *data2)
+{
+ for (uint32_t i = 0; i < MAX_RSS_KEY_LEN; ++i)
+ if (data1->key[i] != data2->key[i])
+ return 0;
+
+ return data1->func == data2->func && data1->hash_mask == data2->hash_mask;
+}
+
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_hsh_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_HSH;
+
+ /* check if default hash configuration shall be used, i.e. rss_hf is not set */
+ /*
+ * NOTE: hsh id 0 is reserved for the "default" HSH used by port
+ * configuration; all ports share the same default hash settings.
+ */
+ if (data->hash_mask == 0) {
+ idx.ids = 0;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_hsh; ++i) {
+ int ref = db->hsh[i].ref;
+
+ if (ref > 0 && hw_db_inline_hsh_compare(data, &db->hsh[i].data)) {
+ idx.ids = i;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ struct nt_eth_rss_conf tmp_rss_conf;
+
+ tmp_rss_conf.rss_hf = data->hash_mask;
+ memcpy(tmp_rss_conf.rss_key, data->key, MAX_RSS_KEY_LEN);
+ tmp_rss_conf.algorithm = data->func;
+ int res = flow_nic_set_hasher_fields(ndev, idx.ids, tmp_rss_conf);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->hsh[idx.ids].ref = 1;
+ memcpy(&db->hsh[idx.ids].data, data, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, idx.ids);
+
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->hsh[idx.ids].ref += 1;
+}
+
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->hsh[idx.ids].ref -= 1;
+
+ if (db->hsh[idx.ids].ref <= 0) {
+ /*
+ * NOTE: hsh id 0 is reserved for "default" HSH used by
+ * port configuration, so we shall keep it even if
+ * it is not used by any flow
+ */
+ if (idx.ids > 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, idx.ids, 0, 0x0);
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->hsh[idx.ids].data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_free_resource(ndev, RES_HSH_RCP, idx.ids);
+ }
+
+ db->hsh[idx.ids].ref = 0;
+ }
+}
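The reference handling above (slot 0 reserved as the shared port-level default, other recipes reclaimed only when their count drops to zero) can be sketched as follows. The table, names, and return convention are illustrative, not the driver's API:

```c
#include <assert.h>
#include <string.h>

#define NB_HSH 8

struct hsh_entry { int ref; };

static struct hsh_entry tab[NB_HSH];

static void hsh_ref(int id) { tab[id].ref += 1; }

/* Mirrors hw_db_inline_hsh_deref(): slot 0 is never reclaimed; other
 * slots report 1 when the caller should flush and free the recipe. */
static int hsh_deref(int id)
{
	tab[id].ref -= 1;
	if (tab[id].ref <= 0) {
		tab[id].ref = 0;
		if (id > 0)
			return 1;	/* recipe can be reclaimed */
	}
	return 0;
}
```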
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index e104ba7327..c97bdef1b7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -60,6 +60,10 @@ struct hw_db_km_ft {
HW_DB_IDX;
};
+struct hw_db_hsh_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
@@ -68,6 +72,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_SLC_LR,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
+ HW_DB_IDX_TYPE_HSH,
};
/* Functionality data types */
@@ -133,6 +138,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_hsh_idx hsh;
};
};
};
@@ -175,6 +181,11 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data);
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index beb7fe2cb3..ebdf68385e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -25,6 +25,15 @@
#define NT_VIOLATING_MBR_CFN 0
#define NT_VIOLATING_MBR_QSL 1
+#define RTE_ETH_RSS_UDP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)
+
+#define RTE_ETH_RSS_TCP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX)
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2323,10 +2332,27 @@ static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_d
}
}
+static void setup_db_hsh_data(struct nic_flow_def *fd, struct hw_db_inline_hsh_data *hsh_data)
+{
+ memset(hsh_data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+
+ hsh_data->func = fd->hsh.func;
+ hsh_data->hash_mask = fd->hsh.types;
+
+ if (fd->hsh.key != NULL) {
+ /*
+ * Just a safeguard: checking and error handling of rss_key_len
+ * shall be done at API layers above.
+ */
+ memcpy(&hsh_data->key, fd->hsh.key,
+ fd->hsh.key_len < MAX_RSS_KEY_LEN ? fd->hsh.key_len : MAX_RSS_KEY_LEN);
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
- const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data,
uint32_t group __rte_unused,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
@@ -2363,6 +2389,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle, hsh_data);
+ local_idxs[(*local_idx_counter)++] = hsh_idx.raw;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2406,6 +2443,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
+ setup_db_hsh_data(fd, &hsh_data);
if (attr->group > 0 && fd_has_empty_pattern(fd)) {
/*
@@ -2489,6 +2527,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle,
+ &hsh_data);
+ fh->db_idxs[fh->db_idx_counter++] = hsh_idx.raw;
+ action_set_data.hsh = hsh_idx;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2668,6 +2719,126 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
return NULL;
}
+/*
+ * Public functions
+ */
+
+/*
+ * FPGA uses up to 10 32-bit words (320 bits) for hash calculation + 8 bits for L4 protocol number.
+ * Hashed data are split between two 128-bit Quad Words (QW)
+ * and two 32-bit Words (W), which can refer to different header parts.
+ */
+enum hsh_words_id {
+ HSH_WORDS_QW0 = 0,
+ HSH_WORDS_QW4,
+ HSH_WORDS_W8,
+ HSH_WORDS_W9,
+ HSH_WORDS_SIZE,
+};
+
+/* struct with details about hash QWs & Ws */
+struct hsh_words {
+ /*
+ * index of W (word) or index of 1st word of QW (quad word)
+ * is used for hash mask calculation
+ */
+ uint8_t index;
+ uint8_t toeplitz_index; /* offset in bytes of the given [Q]W inside the Toeplitz RSS key */
+ enum hw_hsh_e pe; /* offset to header part, e.g. beginning of L4 */
+ enum hw_hsh_e ofs; /* relative offset in BYTES to 'pe' header offset above */
+ uint16_t bit_len; /* max length of header part in bits to fit into QW/W */
+ bool free; /* only free words can be used for hsh calculation */
+};
+
+static enum hsh_words_id get_free_word(struct hsh_words *words, uint16_t bit_len)
+{
+ enum hsh_words_id ret = HSH_WORDS_SIZE;
+ uint16_t ret_bit_len = UINT16_MAX;
+
+ for (enum hsh_words_id i = HSH_WORDS_QW0; i < HSH_WORDS_SIZE; i++) {
+ if (words[i].free && bit_len <= words[i].bit_len &&
+ words[i].bit_len < ret_bit_len) {
+ ret = i;
+ ret_bit_len = words[i].bit_len;
+ }
+ }
+
+ return ret;
+}
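The best-fit scan in `get_free_word()` above can be exercised in isolation. The sketch below mirrors its logic with an illustrative struct (not the driver's `hsh_words`), assuming two 128-bit QWs followed by two 32-bit Ws:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mirror of the driver's word bookkeeping: each entry is a
 * hash word with a capacity in bits and a free flag. */
struct demo_word {
	uint16_t bit_len; /* capacity of the word in bits */
	bool free;        /* still available for hash calculation */
};

/* Best-fit selection: pick the free word with the smallest capacity that
 * can still accommodate 'bit_len' bits, like get_free_word() above. */
static int demo_get_free_word(const struct demo_word *words, int n, uint16_t bit_len)
{
	int ret = -1;
	uint16_t ret_bit_len = UINT16_MAX;

	for (int i = 0; i < n; i++) {
		if (words[i].free && bit_len <= words[i].bit_len &&
		    words[i].bit_len < ret_bit_len) {
			ret = i;
			ret_bit_len = words[i].bit_len;
		}
	}
	return ret;
}
```

The best-fit rule keeps the scarce 32-bit Ws available for small header parts and only spends a 128-bit QW when nothing smaller fits.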
+
+static int flow_nic_set_hasher_part_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct hsh_words *words, uint32_t pe, uint32_t ofs,
+ int bit_len, bool toeplitz)
+{
+ int res = 0;
+
+ /* check if there is any free word, which can accommodate header part of given 'bit_len' */
+ enum hsh_words_id word = get_free_word(words, bit_len);
+
+ if (word == HSH_WORDS_SIZE) {
+ NT_LOG(ERR, FILTER, "Cannot add additional %d bits into hash", bit_len);
+ return -1;
+ }
+
+ words[word].free = false;
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].pe, hsh_idx, 0, pe);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].pe,
+ hsh_idx, pe);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].ofs, hsh_idx, 0, ofs);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].ofs,
+ hsh_idx, ofs);
+
+ /* set HW_HSH_RCP_WORD_MASK based on used QW/W and given 'bit_len' */
+ int mask_bit_len = bit_len;
+ uint32_t mask = 0x0;
+ uint32_t mask_be = 0x0;
+ uint32_t toeplitz_mask[9] = { 0x0 };
+ /* iterate through all words of QW */
+ uint16_t words_count = words[word].bit_len / 32;
+
+ for (uint16_t mask_off = 1; mask_off <= words_count; mask_off++) {
+ if (mask_bit_len >= 32) {
+ mask_bit_len -= 32;
+ mask = 0xffffffff;
+ mask_be = mask;
+
+ } else if (mask_bit_len > 0) {
+ /* keep bits from left to right, i.e. little to big endian */
+ mask_be = 0xffffffff >> (32 - mask_bit_len);
+ mask = mask_be << (32 - mask_bit_len);
+ mask_bit_len = 0;
+
+ } else {
+ mask = 0x0;
+ mask_be = 0x0;
+ }
+
+ /* reorder QW words mask from little to big endian */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx,
+ words[word].index + words_count - mask_off, mask);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, words[word].index + words_count - mask_off, mask);
+ toeplitz_mask[words[word].toeplitz_index + mask_off - 1] = mask_be;
+ }
+
+ if (toeplitz) {
+ NT_LOG(DBG, FILTER,
+ "Partial Toeplitz RSS key mask: %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 "",
+ toeplitz_mask[8], toeplitz_mask[7], toeplitz_mask[6], toeplitz_mask[5],
+ toeplitz_mask[4], toeplitz_mask[3], toeplitz_mask[2], toeplitz_mask[1],
+ toeplitz_mask[0]);
+ NT_LOG(DBG, FILTER,
+ " MSB LSB");
+ }
+
+ return res;
+}
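The WORD_MASK loop above splits a header part of `bit_len` bits MSB-first across consecutive 32-bit words. A standalone sketch of that mask derivation (function name and array handling are illustrative, not driver API):

```c
#include <stdint.h>

/* Derive per-32-bit-word masks for a header part of 'bit_len' bits,
 * keeping the most significant bits first, as in
 * flow_nic_set_hasher_part_inline() above. */
static void demo_word_masks(int bit_len, int words_count, uint32_t *masks)
{
	int mask_bit_len = bit_len;

	for (int i = 0; i < words_count; i++) {
		if (mask_bit_len >= 32) {
			/* whole word is hashed */
			mask_bit_len -= 32;
			masks[i] = 0xffffffff;
		} else if (mask_bit_len > 0) {
			/* keep only the most significant 'mask_bit_len' bits */
			masks[i] = (0xffffffffu >> (32 - mask_bit_len))
				   << (32 - mask_bit_len);
			mask_bit_len = 0;
		} else {
			masks[i] = 0x0;
		}
	}
}
```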
+
/*
* Public functions
*/
@@ -2718,6 +2889,12 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+ /* Set default hasher recipe to 5-tuple */
+ flow_nic_set_hasher(ndev, 0, HASH_ALGO_5TUPLE);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -2784,6 +2961,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, 0, 0, 0);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
@@ -2981,6 +3162,672 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
+{
+ return (hash_mask & hash_bits) == hash_bits;
+}
+
+static __rte_always_inline void unset_bits(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ *hash_mask &= ~hash_bits;
+}
+
+static __rte_always_inline void unset_bits_and_log(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", *hash_mask & hash_bits) == 0)
+ NT_LOG(DBG, FILTER, "Configured RSS types:%s", rss_buffer);
+
+ unset_bits(hash_mask, hash_bits);
+}
+
+static __rte_always_inline void unset_bits_if_all_enabled(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ if (all_bits_enabled(*hash_mask, hash_bits))
+ unset_bits(hash_mask, hash_bits);
+}
+
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ uint64_t fields = rss_conf.rss_hf;
+
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", fields) == 0)
+ NT_LOG(DBG, FILTER, "Requested RSS types:%s", rss_buffer);
+
+ /*
+ * Configure all (Q)Words usable for hash calculation.
+ * The hash can be calculated from 4 independent header parts:
+ * | QW0 | QW4 | W8 | W9 |
+ * word | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
+ */
+ struct hsh_words words[HSH_WORDS_SIZE] = {
+ { 0, 5, HW_HSH_RCP_QW0_PE, HW_HSH_RCP_QW0_OFS, 128, true },
+ { 4, 1, HW_HSH_RCP_QW4_PE, HW_HSH_RCP_QW4_OFS, 128, true },
+ { 8, 0, HW_HSH_RCP_W8_PE, HW_HSH_RCP_W8_OFS, 32, true },
+ {
+ 9, 255, HW_HSH_RCP_W9_PE, HW_HSH_RCP_W9_OFS, 32,
+ true
+ }, /* not supported for Toeplitz */
+ };
+
+ int res = 0;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+ /* enable hashing */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+
+ /* configure selected hash function and its key */
+ bool toeplitz = false;
+
+ switch (rss_conf.algorithm) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ /* Use default NTH10 hashing algorithm */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 0);
+ /* Use the first 32 bits of rss_key to configure the NTH10 SEED */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0,
+ rss_conf.rss_key[0] << 24 | rss_conf.rss_key[1] << 16 |
+ rss_conf.rss_key[2] << 8 | rss_conf.rss_key[3]);
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ toeplitz = true;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 1);
+ uint8_t empty_key = 0;
+
+ /* Toeplitz key (always 40B) must be encoded from little to big endian */
+ for (uint8_t i = 0; i <= (MAX_RSS_KEY_LEN - 8); i += 8) {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 |
+ rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 |
+ rss_conf.rss_key[i + 7]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 | rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 | rss_conf.rss_key[i + 7]);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 |
+ rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 |
+ rss_conf.rss_key[i + 3]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 | rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 | rss_conf.rss_key[i + 3]);
+ empty_key |= rss_conf.rss_key[i] | rss_conf.rss_key[i + 1] |
+ rss_conf.rss_key[i + 2] | rss_conf.rss_key[i + 3] |
+ rss_conf.rss_key[i + 4] | rss_conf.rss_key[i + 5] |
+ rss_conf.rss_key[i + 6] | rss_conf.rss_key[i + 7];
+ }
+
+ if (empty_key == 0) {
+ NT_LOG(ERR, FILTER,
+ "Toeplitz key must be configured. Key with all bytes set to zero is not allowed.");
+ return -1;
+ }
+
+ words[HSH_WORDS_W9].free = false;
+ NT_LOG(DBG, FILTER,
+ "Toeplitz hashing is enabled, thus W9 and P_MASK cannot be used.");
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Unknown hashing function %d requested", rss_conf.algorithm);
+ return -1;
+ }
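The key handling in the switch above can be sketched standalone: the NTH10 seed is the first four key bytes packed big-endian, and the 40-byte Toeplitz key is written into 32-bit words with each 8-byte pair swapped so the key lands in big-endian register order. All names below are illustrative, not the driver's API:

```c
#include <stdint.h>

#define DEMO_KEY_LEN 40 /* a Toeplitz RSS key is always 40 bytes */

/* NTH10 seed: first four key bytes, big-endian. */
static uint32_t demo_nth10_seed(const uint8_t *key)
{
	return (uint32_t)key[0] << 24 | key[1] << 16 | key[2] << 8 | key[3];
}

/* Pack the 40-byte Toeplitz key into ten 32-bit words, swapping the two
 * words of each 8-byte pair, mirroring the HW_HSH_RCP_K loop above. */
static void demo_pack_toeplitz_key(const uint8_t *key, uint32_t *regs)
{
	for (int i = 0; i <= DEMO_KEY_LEN - 8; i += 8) {
		regs[i / 4] = (uint32_t)key[i + 4] << 24 | key[i + 5] << 16 |
			      key[i + 6] << 8 | key[i + 7];
		regs[i / 4 + 1] = (uint32_t)key[i] << 24 | key[i + 1] << 16 |
				  key[i + 2] << 8 | key[i + 3];
	}
}
```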
+
+ /* indication that some IPv6 flag is present */
+ bool ipv6 = fields & (NT_ETH_RSS_IPV6_MASK);
+ /* store proto mask for later use at IP and L4 checksum handling */
+ uint64_t l4_proto_mask = fields &
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX);
+
+ /* outermost headers are used by default, so innermost bit takes precedence if detected */
+ bool outer = (fields & RTE_ETH_RSS_LEVEL_INNERMOST) ? false : true;
+ unset_bits(&fields, RTE_ETH_RSS_LEVEL_MASK);
+
+ if (fields == 0) {
+ NT_LOG(ERR, FILTER, "RSS hash configuration 0x%" PRIX64 " is not valid.",
+ rss_conf.rss_hf);
+ return -1;
+ }
+
+ /* indication that IPv4 `protocol` or IPv6 `next header` fields shall be part of the hash
+ */
+ bool l4_proto_hash = false;
+
+ /*
+ * Check if SRC_ONLY & DST_ONLY are used simultaneously;
+ * according to DPDK semantics, we shall behave as if neither of these bits were set
+ */
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
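The `unset_bits_if_all_enabled` rule applied in the three calls above can be demonstrated with illustrative bit values (not the real `RTE_ETH_RSS_*` constants):

```c
#include <stdint.h>

/* Illustrative flag values; the real RTE_ETH_RSS_* bits differ. */
#define DEMO_L3_SRC_ONLY (1ULL << 0)
#define DEMO_L3_DST_ONLY (1ULL << 1)

/* Mirrors unset_bits_if_all_enabled(): when both SRC_ONLY and DST_ONLY
 * are requested, DPDK semantics say to behave as if neither were set. */
static void demo_unset_if_all(uint64_t *mask, uint64_t bits)
{
	if ((*mask & bits) == bits)
		*mask &= ~bits;
}
```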
+
+ /* L2 */
+ if (fields & (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 6, 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 96, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 6,
+ 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 96, toeplitz);
+ }
+
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY |
+ RTE_ETH_RSS_L2_DST_ONLY);
+ }
+
+ /*
+ * VLAN support of multiple VLAN headers,
+ * where S-VLAN is the first and C-VLAN the last VLAN header
+ */
+ if (fields & RTE_ETH_RSS_C_VLAN) {
+ /*
+ * Use the MPLS protocol offset, which points just after the ethertype, with a
+ * relative offset of -6 (i.e. 2 bytes of ethertype & size + 4 bytes of VLAN
+ * header field) to access the last VLAN header
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer C-VLAN hasher.");
+ /*
+ * use the whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, -6,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner C-VLAN hasher.");
+ /*
+ * use the whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ -6, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_C_VLAN);
+ }
+
+ if (fields & RTE_ETH_RSS_S_VLAN) {
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer S-VLAN hasher.");
+ /*
+ * use the whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_FIRST_VLAN, 0, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner S-VLAN hasher.");
+ /*
+ * use the whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_VLAN,
+ 0, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_S_VLAN);
+ }
+ /* L2 payload */
+ /* Calculate the hash over 128 bits of L2 payload; use the MPLS protocol offset to
+ * address the beginning of the L2 payload even if no MPLS header is present
+ */
+ if (fields & RTE_ETH_RSS_L2_PAYLOAD) {
+ uint64_t outer_fields_enabled = 0;
+
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ 0, 128, toeplitz);
+ outer_fields_enabled = fields & RTE_ETH_RSS_GTPU;
+ }
+
+ /*
+ * L2 PAYLOAD hashing overrides all L3 & L4 RSS flags.
+ * Thus we can clear all remaining (supported)
+ * RSS flags...
+ */
+ unset_bits_and_log(&fields, NT_ETH_RSS_OFFLOAD_MASK);
+ /*
+ * ...but in case of INNER L2 PAYLOAD we must process
+ * "always outer" GTPU field if enabled
+ */
+ fields |= outer_fields_enabled;
+ }
+
+ /* L3 + L4 protocol number */
+ if (fields & RTE_ETH_RSS_IPV4_CHKSUM) {
+ /* only IPv4 checksum is supported by DPDK RTE_ETH_RSS_* types */
+ if (ipv6) {
+ NT_LOG(ERR, FILTER,
+ "RSS: IPv4 checksum requested with IPv6 header hashing!");
+ res = 1;
+
+ } else if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L3, 10,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L3,
+ 10, 16, toeplitz);
+ }
+
+ /*
+ * L3 checksum is made from whole L3 header, i.e. no need to process other
+ * L3 hashing flags
+ */
+ unset_bits_and_log(&fields, RTE_ETH_RSS_IPV4_CHKSUM | NT_ETH_RSS_IP_MASK);
+ }
+
+ if (fields & NT_ETH_RSS_IP_MASK) {
+ if (ipv6) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & (RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6)) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 32, toeplitz);
+ }
+ }
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0,
+ 1);
+
+ } else {
+ /* IPv4 */
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 32, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 16,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 64, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 32,
+ toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 16, 32,
+ toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 64,
+ toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & RTE_ETH_RSS_FRAG_IPV4) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 16, toeplitz);
+ }
+ }
+ }
+
+ /* check if L4 protocol type shall be part of hash */
+ if (l4_proto_mask)
+ l4_proto_hash = true;
+
+ unset_bits_and_log(&fields, NT_ETH_RSS_IP_MASK);
+ }
+
+ /* L4 */
+ if (fields & (RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 2, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 32, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 2,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 32, toeplitz);
+ }
+
+ l4_proto_hash = true;
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY);
+ }
+
+ /* IPv4 protocol / IPv6 next header fields */
+ if (l4_proto_hash) {
+ /* NOTE: HW_HSH_RCP_P_MASK is not supported for Toeplitz, thus one of QW0, QW4
+ * or W8 must be used to hash on the `protocol` field of IPv4 or the `next header`
+ * field of the IPv6 header.
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 6, 8,
+ toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 9, 8,
+ toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 0);
+ }
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 6, 8, toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 9, 8, toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 1);
+ }
+ }
+
+ l4_proto_hash = false;
+ }
+
+ /*
+ * GTPU - for UPF use cases we always use TEID from outermost GTPU header
+ * even if other headers are innermost
+ */
+ if (fields & RTE_ETH_RSS_GTPU) {
+ NT_LOG(DBG, FILTER, "Set outer GTPU TEID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L4_PAYLOAD, 4, 32,
+ toeplitz);
+ unset_bits_and_log(&fields, RTE_ETH_RSS_GTPU);
+ }
+
+ /* Checksums */
+ /* only UDP, TCP and SCTP checksums are supported */
+ if (fields & RTE_ETH_RSS_L4_CHKSUM) {
+ switch (l4_proto_mask) {
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_UDP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 6, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 6, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_TCP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 16, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 16, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 8, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 8, 32,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+
+ /* none or unsupported protocol was chosen */
+ case 0:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing is supported only for UDP, TCP and SCTP protocols");
+ res = -1;
+ break;
+
+ /* multiple L4 protocols were selected */
+ default:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing can be enabled just for one of UDP, TCP or SCTP protocols");
+ res = -1;
+ break;
+ }
+ }
+
+ if (fields || res != 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", rss_conf.rss_hf) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration%s is not supported for hash func %s.",
+ rss_buffer,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration 0x%" PRIX64
+ " is not supported for hash func %s.",
+ rss_conf.rss_hf,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+ }
+
+ return -1;
+ }
+
+ return res;
+}
+
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -2994,6 +3841,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b87f8542ac..e623bb2352 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,4 +38,8 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 149c549112..1069be2f85 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -252,6 +252,10 @@ struct profile_inline_ops {
int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v2 32/73] net/ntnic: add TPE module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 31/73] net/ntnic: add hash API Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 33/73] net/ntnic: add FLM module Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TX Packet Editor is a software abstraction module
that keeps track of the handful of FPGA modules
used to edit packets in the TX pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 16 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 373 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 70 ++
.../profile_inline/flow_api_profile_inline.c | 127 ++-
5 files changed, 1342 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index cee148807a..e16dcd478f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -889,24 +889,40 @@ void hw_mod_tpe_free(struct flow_api_backend_s *be);
int hw_mod_tpe_reset(struct flow_api_backend_s *be);
int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value);
int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
enum debug_mode_e {
FLOW_BACKEND_DEBUG_MODE_NONE = 0x0000,
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
index 0d73b795d5..ba8f2d0dbb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
@@ -169,6 +169,82 @@ int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpp_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpp_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpp_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpp_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPP_RCP_EXP:
+ GET_SET(be->tpe.v3.rpp_rcp[index].exp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpp_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* IFR_RCP
*/
@@ -203,6 +279,90 @@ int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_ins_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_ins_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.ins_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_ins_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_INS_RCP_DYN:
+ GET_SET(be->tpe.v3.ins_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_INS_RCP_OFS:
+ GET_SET(be->tpe.v3.ins_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_INS_RCP_LEN:
+ GET_SET(be->tpe.v3.ins_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_ins_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RCP
*/
@@ -220,6 +380,102 @@ int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v3_rpl_v4_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RCP_DYN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_RPL_RCP_OFS:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_RPL_RCP_LEN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].len, value);
+ break;
+
+ case HW_TPE_RPL_RCP_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_RCP_EXT_PRIO:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ext_prio, value);
+ break;
+
+ case HW_TPE_RPL_RCP_ETH_TYPE_WR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].eth_type_wr, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_EXT
*/
@@ -237,6 +493,86 @@ int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_ext_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_ext_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_ext[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_ext_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value, be->tpe.nb_rpl_ext_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_EXT_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_ext[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_EXT_META_RPL_LEN:
+ GET_SET(be->tpe.v3.rpl_ext[index].meta_rpl_len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_ext_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RPL
*/
@@ -254,6 +590,89 @@ int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rpl_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rpl_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rpl[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_rpl_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value, be->tpe.nb_rpl_depth);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RPL_VALUE:
+ if (get)
+ memcpy(value, be->tpe.v3.rpl_rpl[index].value,
+ sizeof(uint32_t) * 4);
+
+ else
+ memcpy(be->tpe.v3.rpl_rpl[index].value, value,
+ sizeof(uint32_t) * 4);
+
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_tpe_rpl_rpl_mod(be, field, index, value, 0);
+}
+
/*
* CPY_RCP
*/
@@ -273,6 +692,96 @@ int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_cpy_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_cpy_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ const uint32_t cpy_size = be->tpe.nb_cpy_writers * be->tpe.nb_rcp_categories;
+
+ if (index >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.cpy_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_cpy_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value, cpy_size);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CPY_RCP_READER_SELECT:
+ GET_SET(be->tpe.v3.cpy_rcp[index].reader_select, value);
+ break;
+
+ case HW_TPE_CPY_RCP_DYN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_CPY_RCP_OFS:
+ GET_SET(be->tpe.v3.cpy_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_CPY_RCP_LEN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_cpy_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* HFU_RCP
*/
@@ -290,6 +799,166 @@ int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_hfu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_hfu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.hfu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_hfu_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_outer_l4_len, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_hfu_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* CSU_RCP
*/
@@ -306,3 +975,91 @@ int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_csu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+
+static int hw_mod_tpe_csu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.csu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_csu_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol4_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il4_cmd, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_csu_rcp_mod(be, field, index, &value, 0);
+}
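
All of the `hw_mod_tpe_*_mod` helpers in this patch share one get/set dispatch shape: bounds-check the index, switch on the backend version, then switch on the field, with `HW_TPE_PRESET_ALL` write-only and `GET_SET` handling plain scalar fields. As an aside for reviewers, here is a minimal, self-contained sketch of that pattern with hypothetical names (`rcp_mod`, `FIELD_EXP`, a simplified `GET_SET`); it is not driver code, just the shape:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the driver's error codes and GET_SET macro. */
#define UNSUP_FIELD (-1)
#define INDEX_TOO_LARGE (-2)
#define GET_SET(reg, val) do { if (get) *(val) = (reg); else (reg) = *(val); } while (0)

enum field_e { FIELD_PRESET_ALL, FIELD_EXP };

struct rcp_s { uint32_t exp; };

static struct rcp_s rcp_table[8];
static const uint32_t nb_categories = 8;

/* One mod helper serves both the set path (get=0) and the get path (get=1). */
static int rcp_mod(enum field_e field, uint32_t index, uint32_t *value, int get)
{
	if (index >= nb_categories)
		return INDEX_TOO_LARGE;

	switch (field) {
	case FIELD_PRESET_ALL:
		if (get)
			return UNSUP_FIELD;	/* preset is write-only */

		memset(&rcp_table[index], (uint8_t)*value, sizeof(struct rcp_s));
		break;

	case FIELD_EXP:
		GET_SET(rcp_table[index].exp, value);
		break;

	default:
		return UNSUP_FIELD;
	}

	return 0;
}

/* Thin public wrappers, mirroring the hw_mod_tpe_*_set style. */
static int rcp_set(enum field_e field, uint32_t index, uint32_t value)
{
	return rcp_mod(field, index, &value, 0);
}

static int rcp_get(enum field_e field, uint32_t index, uint32_t *value)
{
	return rcp_mod(field, index, value, 1);
}
```

The value of the shared `_mod` core is that the set and get wrappers cannot drift apart: both go through the same bounds check and field dispatch.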
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 068c890b45..dec96fce85 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,6 +30,17 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_tpe {
+ struct hw_db_inline_tpe_data data;
+ int ref;
+ } *tpe;
+
+ struct hw_db_inline_resource_db_tpe_ext {
+ struct hw_db_inline_tpe_ext_data data;
+ int replace_ram_idx;
+ int ref;
+ } *tpe_ext;
+
struct hw_db_inline_resource_db_hsh {
struct hw_db_inline_hsh_data data;
int ref;
@@ -38,6 +49,8 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_tpe;
+ uint32_t nb_tpe_ext;
uint32_t nb_hsh;
/* Items */
@@ -101,6 +114,22 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_tpe = ndev->be.tpe.nb_rcp_categories;
+ db->tpe = calloc(db->nb_tpe, sizeof(struct hw_db_inline_resource_db_tpe));
+
+ if (db->tpe == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_tpe_ext = ndev->be.tpe.nb_rpl_ext_categories;
+ db->tpe_ext = calloc(db->nb_tpe_ext, sizeof(struct hw_db_inline_resource_db_tpe_ext));
+
+ if (db->tpe_ext == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -154,6 +183,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->tpe);
+ free(db->tpe_ext);
free(db->hsh);
free(db->cat);
@@ -195,6 +226,15 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_TPE:
+ hw_db_inline_tpe_deref(ndev, db_handle, *(struct hw_db_tpe_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ hw_db_inline_tpe_ext_deref(ndev, db_handle,
+ *(struct hw_db_tpe_ext_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -240,6 +280,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_SLC_LR:
return &db->slc_lr[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_TPE:
+ return &db->tpe[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ return &db->tpe_ext[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -652,6 +698,333 @@ void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
}
}
+/******************************************************************************/
+/* TPE */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_compare(const struct hw_db_inline_tpe_data *data1,
+ const struct hw_db_inline_tpe_data *data2)
+{
+ for (int i = 0; i < 6; ++i)
+ if (data1->writer[i].en != data2->writer[i].en ||
+ data1->writer[i].reader_select != data2->writer[i].reader_select ||
+ data1->writer[i].dyn != data2->writer[i].dyn ||
+ data1->writer[i].ofs != data2->writer[i].ofs ||
+ data1->writer[i].len != data2->writer[i].len)
+ return 0;
+
+ return data1->insert_len == data2->insert_len && data1->new_outer == data2->new_outer &&
+ data1->calc_eth_type_from_inner_ip == data2->calc_eth_type_from_inner_ip &&
+ data1->ttl_en == data2->ttl_en && data1->ttl_dyn == data2->ttl_dyn &&
+ data1->ttl_ofs == data2->ttl_ofs && data1->len_a_en == data2->len_a_en &&
+ data1->len_a_pos_dyn == data2->len_a_pos_dyn &&
+ data1->len_a_pos_ofs == data2->len_a_pos_ofs &&
+ data1->len_a_add_dyn == data2->len_a_add_dyn &&
+ data1->len_a_add_ofs == data2->len_a_add_ofs &&
+ data1->len_a_sub_dyn == data2->len_a_sub_dyn &&
+ data1->len_b_en == data2->len_b_en &&
+ data1->len_b_pos_dyn == data2->len_b_pos_dyn &&
+ data1->len_b_pos_ofs == data2->len_b_pos_ofs &&
+ data1->len_b_add_dyn == data2->len_b_add_dyn &&
+ data1->len_b_add_ofs == data2->len_b_add_ofs &&
+ data1->len_b_sub_dyn == data2->len_b_sub_dyn &&
+ data1->len_c_en == data2->len_c_en &&
+ data1->len_c_pos_dyn == data2->len_c_pos_dyn &&
+ data1->len_c_pos_ofs == data2->len_c_pos_ofs &&
+ data1->len_c_add_dyn == data2->len_c_add_dyn &&
+ data1->len_c_add_ofs == data2->len_c_add_ofs &&
+ data1->len_c_sub_dyn == data2->len_c_sub_dyn;
+}
+
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE;
+
+ for (uint32_t i = 1; i < db->nb_tpe; ++i) {
+ int ref = db->tpe[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_compare(data, &db->tpe[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe[idx.ids].ref = 1;
+ memcpy(&db->tpe[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_data));
+
+ if (data->insert_len > 0) {
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_RPP_RCP_EXP, idx.ids, data->insert_len);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_RPL_PTR, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_EXT_PRIO, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_ETH_TYPE_WR, idx.ids,
+ data->calc_eth_type_from_inner_ip);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+ }
+
+ for (uint32_t i = 0; i < 6; ++i) {
+ if (data->writer[i].en) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i,
+ data->writer[i].reader_select);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, data->writer[i].dyn);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, data->writer[i].ofs);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, data->writer[i].len);
+
+ } else {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, 0);
+ }
+
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_WR, idx.ids, data->len_a_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN, idx.ids,
+ data->new_outer);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_DYN, idx.ids,
+ data->len_a_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_OFS, idx.ids,
+ data->len_a_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_DYN, idx.ids,
+ data->len_a_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_OFS, idx.ids,
+ data->len_a_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_SUB_DYN, idx.ids,
+ data->len_a_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_WR, idx.ids, data->len_b_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_DYN, idx.ids,
+ data->len_b_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_OFS, idx.ids,
+ data->len_b_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_DYN, idx.ids,
+ data->len_b_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_OFS, idx.ids,
+ data->len_b_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_SUB_DYN, idx.ids,
+ data->len_b_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_WR, idx.ids, data->len_c_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_DYN, idx.ids,
+ data->len_c_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_OFS, idx.ids,
+ data->len_c_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_DYN, idx.ids,
+ data->len_c_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_OFS, idx.ids,
+ data->len_c_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_SUB_DYN, idx.ids,
+ data->len_c_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_WR, idx.ids, data->ttl_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_DYN, idx.ids, data->ttl_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_OFS, idx.ids, data->ttl_ofs);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe[idx.ids].ref -= 1;
+
+ if (db->tpe[idx.ids].ref <= 0) {
+ for (uint32_t i = 0; i < 6; ++i) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_PRESET_ALL,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->tpe[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_data));
+ db->tpe[idx.ids].ref = 0;
+ }
+}
+
+/******************************************************************************/
+/* TPE_EXT */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_ext_compare(const struct hw_db_inline_tpe_ext_data *data1,
+ const struct hw_db_inline_tpe_ext_data *data2)
+{
+ return data1->size == data2->size &&
+ memcmp(data1->hdr8, data2->hdr8, HW_DB_INLINE_MAX_ENCAP_SIZE) == 0;
+}
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_ext_idx idx = { .raw = 0 };
+ int rpl_rpl_length = ((int)data->size + 15) / 16;
+ int found = 0, rpl_rpl_index = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE_EXT;
+
+ if (data->size > HW_DB_INLINE_MAX_ENCAP_SIZE) {
+ idx.error = 1;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_tpe_ext; ++i) {
+ int ref = db->tpe_ext[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_ext_compare(data, &db->tpe_ext[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ext_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ rpl_rpl_index = flow_nic_alloc_resource_config(ndev, RES_TPE_RPL, rpl_rpl_length, 1);
+
+ if (rpl_rpl_index < 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe_ext[idx.ids].ref = 1;
+ db->tpe_ext[idx.ids].replace_ram_idx = rpl_rpl_index;
+ memcpy(&db->tpe_ext[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_ext_data));
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_RPL_PTR, idx.ids, rpl_rpl_index);
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_META_RPL_LEN, idx.ids, data->size);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_data[4];
+ memcpy(rpl_data, data->hdr32 + i * 4, sizeof(rpl_data));
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_data);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe_ext[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe_ext[idx.ids].ref -= 1;
+
+ if (db->tpe_ext[idx.ids].ref <= 0) {
+ const int rpl_rpl_length = ((int)db->tpe_ext[idx.ids].data.size + 15) / 16;
+ const int rpl_rpl_index = db->tpe_ext[idx.ids].replace_ram_idx;
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_zero[] = { 0, 0, 0, 0 };
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_zero);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, rpl_rpl_index + i);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ memset(&db->tpe_ext[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_ext_data));
+ db->tpe_ext[idx.ids].ref = 0;
+ }
+}
+
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
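The tpe_ext add/ref/deref trio above follows the driver's general reference-counting pattern: `_add` takes the first reference and programs the hardware, `_ref` bumps the count, and `_deref` releases the RPL resources and wipes the entry once the count drops to zero. A minimal standalone sketch of the same idea (names and types are illustrative, not the driver's):

```c
#include <assert.h>
#include <string.h>

/* Illustrative resource entry; the driver's entries additionally hold
 * hardware indices such as replace_ram_idx. */
struct res_entry {
	int ref;
	unsigned char data[16];
};

/* "add": claim a free entry and take the first reference. */
static void res_add(struct res_entry *e, const unsigned char *data, size_t len)
{
	memcpy(e->data, data, len);
	e->ref = 1;
}

/* "ref": an additional user shares the entry. */
static void res_ref(struct res_entry *e)
{
	e->ref += 1;
}

/* "deref": drop a reference; wipe the entry when the last user is gone. */
static void res_deref(struct res_entry *e)
{
	e->ref -= 1;
	if (e->ref <= 0) {
		memset(e->data, 0, sizeof(e->data));
		e->ref = 0;
	}
}
```

The driver's variant additionally clears the hardware replacement RAM and returns the RES_TPE_RPL slots before zeroing the entry.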
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index c97bdef1b7..18d959307e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -52,6 +52,60 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_inline_tpe_data {
+ uint32_t insert_len : 16;
+ uint32_t new_outer : 1;
+ uint32_t calc_eth_type_from_inner_ip : 1;
+ uint32_t ttl_en : 1;
+ uint32_t ttl_dyn : 5;
+ uint32_t ttl_ofs : 8;
+
+ struct {
+ uint32_t en : 1;
+ uint32_t reader_select : 3;
+ uint32_t dyn : 5;
+ uint32_t ofs : 14;
+ uint32_t len : 5;
+ uint32_t padding : 4;
+ } writer[6];
+
+ uint32_t len_a_en : 1;
+ uint32_t len_a_pos_dyn : 5;
+ uint32_t len_a_pos_ofs : 8;
+ uint32_t len_a_add_dyn : 5;
+ uint32_t len_a_add_ofs : 8;
+ uint32_t len_a_sub_dyn : 5;
+
+ uint32_t len_b_en : 1;
+ uint32_t len_b_pos_dyn : 5;
+ uint32_t len_b_pos_ofs : 8;
+ uint32_t len_b_add_dyn : 5;
+ uint32_t len_b_add_ofs : 8;
+ uint32_t len_b_sub_dyn : 5;
+
+ uint32_t len_c_en : 1;
+ uint32_t len_c_pos_dyn : 5;
+ uint32_t len_c_pos_ofs : 8;
+ uint32_t len_c_add_dyn : 5;
+ uint32_t len_c_add_ofs : 8;
+ uint32_t len_c_sub_dyn : 5;
+};
+
+struct hw_db_inline_tpe_ext_data {
+ uint32_t size;
+ union {
+ uint8_t hdr8[HW_DB_INLINE_MAX_ENCAP_SIZE];
+ uint32_t hdr32[(HW_DB_INLINE_MAX_ENCAP_SIZE + 3) / 4];
+ };
+};
+
+struct hw_db_tpe_idx {
+ HW_DB_IDX;
+};
+struct hw_db_tpe_ext_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -70,6 +124,9 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_TPE,
+ HW_DB_IDX_TYPE_TPE_EXT,
+
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
@@ -138,6 +195,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
};
@@ -181,6 +239,18 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data);
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data);
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+
struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_hsh_data *data);
void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
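The hdr8/hdr32 union in `hw_db_inline_tpe_ext_data` lets the flow layer fill the encap header byte-wise while the RPL programming path consumes it as 32-bit words, one 16-byte RAM line at a time; the .c side rounds the copy length up with `(len + 15) & ~15` and derives the line count as `(size + 15) / 16`. A standalone check of that arithmetic (plain C, nothing driver-specific):

```c
#include <assert.h>

/* Round a header length up to a whole 16-byte replacement-RAM line,
 * mirroring the memcpy length (len + 15) & ~15 in the TPE EXT setup. */
static unsigned int round_up_16(unsigned int len)
{
	return (len + 15u) & ~15u;
}

/* Number of 16-byte RAM lines programmed for a header of 'size' bytes,
 * mirroring rpl_rpl_length = (size + 15) / 16. */
static unsigned int rpl_lines(unsigned int size)
{
	return (size + 15u) / 16u;
}
```

Both expressions agree: `round_up_16(n) == 16 * rpl_lines(n)` for any length, which is what lets the copy loop and the flush call cover exactly the same RAM range.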
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index ebdf68385e..35ecea28b6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -18,6 +18,8 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
@@ -2420,6 +2422,92 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
}
}
+ /* Setup TPE EXT */
+ if (fd->tun_hdr.len > 0) {
+ assert(fd->tun_hdr.len <= HW_DB_INLINE_MAX_ENCAP_SIZE);
+
+ struct hw_db_inline_tpe_ext_data tpe_ext_data = {
+ .size = fd->tun_hdr.len,
+ };
+
+ memset(tpe_ext_data.hdr8, 0x0, HW_DB_INLINE_MAX_ENCAP_SIZE);
+ memcpy(tpe_ext_data.hdr8, fd->tun_hdr.d.hdr8, (fd->tun_hdr.len + 15) & ~15);
+
+ struct hw_db_tpe_ext_idx tpe_ext_idx =
+ hw_db_inline_tpe_ext_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_ext_data);
+ local_idxs[(*local_idx_counter)++] = tpe_ext_idx.raw;
+
+ if (tpe_ext_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE EXT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_rpl_ext_ptr)
+ *flm_rpl_ext_ptr = tpe_ext_idx.ids;
+ }
+
+ /* Setup TPE */
+ assert(fd->modify_field_count <= 6);
+
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip =
+ !fd->tun_hdr.new_outer && fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ tpe_data.writer[i].en = 1;
+ tpe_data.writer[i].reader_select = fd->modify_field[i].select;
+ tpe_data.writer[i].dyn = fd->modify_field[i].dyn;
+ tpe_data.writer[i].ofs = fd->modify_field[i].ofs;
+ tpe_data.writer[i].len = fd->modify_field[i].len;
+ }
+
+ if (fd->tun_hdr.new_outer) {
+ const int fcs_length = 4;
+
+ /* L4 length */
+ tpe_data.len_a_en = 1;
+ tpe_data.len_a_pos_dyn = DYN_L4;
+ tpe_data.len_a_pos_ofs = 4;
+ tpe_data.len_a_add_dyn = 18;
+ tpe_data.len_a_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_a_sub_dyn = DYN_L4;
+
+ /* L3 length */
+ tpe_data.len_b_en = 1;
+ tpe_data.len_b_pos_dyn = DYN_L3;
+ tpe_data.len_b_pos_ofs = fd->tun_hdr.ip_version == 4 ? 2 : 4;
+ tpe_data.len_b_add_dyn = 18;
+ tpe_data.len_b_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_b_sub_dyn = DYN_L3;
+
+ /* GTP length */
+ tpe_data.len_c_en = 1;
+ tpe_data.len_c_pos_dyn = DYN_L4_PAYLOAD;
+ tpe_data.len_c_pos_ofs = 2;
+ tpe_data.len_c_add_dyn = 18;
+ tpe_data.len_c_add_ofs = (uint32_t)(-8 - fcs_length) & 0xff;
+ tpe_data.len_c_sub_dyn = DYN_L4_PAYLOAD;
+ }
+
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle, &tpe_data);
+
+ local_idxs[(*local_idx_counter)++] = tpe_idx.raw;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
return 0;
}
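The `len_*_add_ofs` fields above hold small signed adjustments (subtracting the 4-byte FCS, or the 8-byte GTP header plus FCS) in an unsigned 8-bit register field, encoded two's-complement via `(uint32_t)(-x) & 0xff`. A minimal illustration of that encoding (the helper names are invented for the example; the field width is taken from this patch):

```c
#include <assert.h>
#include <stdint.h>

/* Encode a small signed length adjustment into an unsigned 8-bit
 * register field, as in (uint32_t)(-fcs_length) & 0xff. */
static uint32_t encode_adj8(int adj)
{
	return (uint32_t)adj & 0xff;
}

/* Recover the signed value the hardware effectively adds. */
static int decode_adj8(uint32_t field)
{
	return field >= 0x80 ? (int)field - 0x100 : (int)field;
}
```

So `-4` (drop the FCS) lands in the register as `0xfc`, and `-12` (GTP header plus FCS) as `0xf4`.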
@@ -2540,6 +2628,30 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup TPE */
+ if (fd->ttl_sub_enable) {
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip = !fd->tun_hdr.new_outer &&
+ fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_data);
+ fh->db_idxs[fh->db_idx_counter++] = tpe_idx.raw;
+ action_set_data.tpe = tpe_idx;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
}
/* Setup CAT */
@@ -2848,6 +2960,16 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (!ndev->flow_mgnt_prepared) {
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* KM Flow Type 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_mark_resource_used(ndev, RES_KM_CATEGORY, 0);
+
+ /* Reserved FLM Flow Types */
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_MISS_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_UNHANDLED_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_RCP, 0);
/* COT is locked to CFN. Don't set color for CFN 0 */
hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
@@ -2873,8 +2995,11 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
- /* SLC LR index 0 is reserved */
+ /* SLC LR & TPE index 0 are reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_EXT, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RPL, 0);
/* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
*/
--
2.45.0
* [PATCH v2 33/73] net/ntnic: add FLM module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (31 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 32/73] net/ntnic: add TPE module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 34/73] net/ntnic: add flm rcp module Serhii Iliushyk
` (40 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup
and programming engine which supports exact-match lookups
at line rate for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
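Conceptually, an exact-match engine like FLM resolves each key to at most one learned flow; there is no wildcard or longest-prefix matching involved. A toy software analogue (purely illustrative — an open-addressing table standing in for the SDRAM-backed hash engine, with arbitrary sizes and types):

```c
#include <assert.h>
#include <stdint.h>

#define TBL_SIZE 64u

struct flow_rec {
	int used;
	uint64_t key;      /* e.g. derived from the packet 5-tuple */
	uint32_t action;
};

static struct flow_rec tbl[TBL_SIZE];

/* "Learn" a flow: program key -> action into the first free slot. */
static int flow_learn(uint64_t key, uint32_t action)
{
	for (unsigned int i = 0; i < TBL_SIZE; ++i) {
		unsigned int slot = (unsigned int)((key + i) % TBL_SIZE);
		if (!tbl[slot].used) {
			tbl[slot].used = 1;
			tbl[slot].key = key;
			tbl[slot].action = action;
			return 0;
		}
	}
	return -1; /* table full */
}

/* Exact-match lookup: hit only if this precise key was learned. */
static int flow_lookup(uint64_t key, uint32_t *action)
{
	for (unsigned int i = 0; i < TBL_SIZE; ++i) {
		unsigned int slot = (unsigned int)((key + i) % TBL_SIZE);
		if (!tbl[slot].used)
			return -1; /* miss: key was never learned */
		if (tbl[slot].key == key) {
			*action = tbl[slot].action;
			return 0;
		}
	}
	return -1;
}
```

The hardware module additionally handles unlearn, aging/scan, and statistics, none of which this sketch attempts to model.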
---
drivers/net/ntnic/include/hw_mod_backend.h | 42 +++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 190 +++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 257 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 33 +++
.../profile_inline/flow_api_profile_inline.c | 224 ++++++++++++++-
.../flow_api_profile_inline_config.h | 129 +++++++++
drivers/net/ntnic/ntutil/nt_util.h | 8 +
8 files changed, 1113 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index e16dcd478f..de662c4ed1 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -367,6 +367,18 @@ int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
/* KCE/KCS/FTE KM */
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -374,6 +386,18 @@ int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
enum km_flm_if_select_e if_num, int index, uint32_t *value);
/* KCE/KCS/FTE FLM */
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -384,10 +408,14 @@ int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
@@ -638,7 +666,21 @@ int hw_mod_flm_reset(struct flow_api_backend_s *be);
int hw_mod_flm_control_flush(struct flow_api_backend_s *be);
int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+int hw_mod_flm_status_update(struct flow_api_backend_s *be);
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index 9164ec1ae0..985c821312 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -902,6 +902,95 @@ static int hw_mod_cat_kce_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kce_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kce_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v18.kce[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v21.kce[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, value, 1);
+}
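The `*_mod` helpers above fold get and set into one body through the `GET_SET` macro, which relies on the enclosing function's `get` flag. Its definition is not part of this excerpt; a plausible shape — reading the cached backend field into `*value` in get mode and writing it otherwise — might look like this (hypothetical sketch, the driver's real macro may differ in detail):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of GET_SET: uses a 'get' flag from the enclosing
 * scope to pick the transfer direction between the cached field and the
 * caller's value. */
#define GET_SET(cached_field, value_ptr)               \
	do {                                           \
		if (get)                               \
			*(value_ptr) = (cached_field); \
		else                                   \
			(cached_field) = *(value_ptr); \
	} while (0)

/* Minimal _mod-style accessor over one cached register field. */
static uint32_t cached_bm;

static int demo_mod(uint32_t *value, int get)
{
	GET_SET(cached_bm, value);
	return 0;
}
```

This is why each public `_set` wrapper passes `&value, 0` and each `_get` wrapper passes `value, 1` to the shared `_mod` function.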
+
/*
* KCS
*/
@@ -925,6 +1014,95 @@ static int hw_mod_cat_kcs_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kcs_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kcs_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v18.kcs[index].category, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v21.kcs[index].category[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* FTE
*/
@@ -1094,6 +1272,12 @@ int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cte_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -1154,6 +1338,12 @@ int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cts_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 8c1f3f2d96..f5eaea7c4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -293,11 +293,268 @@ int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, u
return hw_mod_flm_control_mod(be, field, &value, 0);
}
+int hw_mod_flm_status_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_status_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_status_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STATUS_CALIB_SUCCESS:
+ GET_SET(be->flm.v25.status->calib_success, value);
+ break;
+
+ case HW_FLM_STATUS_CALIB_FAIL:
+ GET_SET(be->flm.v25.status->calib_fail, value);
+ break;
+
+ case HW_FLM_STATUS_INITDONE:
+ GET_SET(be->flm.v25.status->initdone, value);
+ break;
+
+ case HW_FLM_STATUS_IDLE:
+ GET_SET(be->flm.v25.status->idle, value);
+ break;
+
+ case HW_FLM_STATUS_CRITICAL:
+ GET_SET(be->flm.v25.status->critical, value);
+ break;
+
+ case HW_FLM_STATUS_PANIC:
+ GET_SET(be->flm.v25.status->panic, value);
+ break;
+
+ case HW_FLM_STATUS_CRCERR:
+ GET_SET(be->flm.v25.status->crcerr, value);
+ break;
+
+ case HW_FLM_STATUS_EFT_BP:
+ GET_SET(be->flm.v25.status->eft_bp, value);
+ break;
+
+ case HW_FLM_STATUS_CACHE_BUFFER_CRITICAL:
+ GET_SET(be->flm.v25.status->cache_buf_critical, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_status_mod(be, field, value, 1);
+}
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be)
{
return be->iface->flm_scan_flush(be->be_dev, &be->flm);
}
+static int hw_mod_flm_scan_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCAN_I:
+ GET_SET(be->flm.v25.scan->i, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_scan_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_load_bin_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_load_bin_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_LOAD_BIN:
+ GET_SET(be->flm.v25.load_bin->bin, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_load_bin_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_prio_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_prio_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PRIO_LIMIT0:
+ GET_SET(be->flm.v25.prio->limit0, value);
+ break;
+
+ case HW_FLM_PRIO_FT0:
+ GET_SET(be->flm.v25.prio->ft0, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT1:
+ GET_SET(be->flm.v25.prio->limit1, value);
+ break;
+
+ case HW_FLM_PRIO_FT1:
+ GET_SET(be->flm.v25.prio->ft1, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT2:
+ GET_SET(be->flm.v25.prio->limit2, value);
+ break;
+
+ case HW_FLM_PRIO_FT2:
+ GET_SET(be->flm.v25.prio->ft2, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT3:
+ GET_SET(be->flm.v25.prio->limit3, value);
+ break;
+
+ case HW_FLM_PRIO_FT3:
+ GET_SET(be->flm.v25.prio->ft3, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_prio_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count)
+{
+ if (count == ALL_ENTRIES)
+ count = be->flm.nb_pst_profiles;
+
+ if ((unsigned int)(start_idx + count) > be->flm.nb_pst_profiles) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ return be->iface->flm_pst_flush(be->be_dev, &be->flm, start_idx, count);
+}
+
+static int hw_mod_flm_pst_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.pst[index], (uint8_t)*value,
+ sizeof(struct flm_v25_pst_s));
+ break;
+
+ case HW_FLM_PST_BP:
+ GET_SET(be->flm.v25.pst[index].bp, value);
+ break;
+
+ case HW_FLM_PST_PP:
+ GET_SET(be->flm.v25.pst[index].pp, value);
+ break;
+
+ case HW_FLM_PST_TP:
+ GET_SET(be->flm.v25.pst[index].tp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_pst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index dec96fce85..61492090ce 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,11 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_FT_LOOKUP_KEY_A 0
+#define HW_DB_FT_LOOKUP_KEY_C 2
+
+#define HW_DB_FT_TYPE_FLM 0
+#define HW_DB_FT_TYPE_KM 1
/******************************************************************************/
/* Handle */
/******************************************************************************/
@@ -59,6 +67,23 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_resource_db_flm_ft {
+ struct hw_db_inline_flm_ft_data data;
+ struct hw_db_flm_ft idx;
+ int ref;
+ } *ft;
+
+ struct hw_db_inline_resource_db_flm_match_set {
+ struct hw_db_match_set_idx idx;
+ int ref;
+ } *match_set;
+
+ struct hw_db_inline_resource_db_flm_cfn_map {
+ int cfn_idx;
+ } *cfn_map;
+ } *flm;
+
struct hw_db_inline_resource_db_km_rcp {
struct hw_db_inline_km_rcp_data data;
int ref;
@@ -70,6 +95,7 @@ struct hw_db_inline_resource_db {
} *km;
uint32_t nb_cat;
+ uint32_t nb_flm_ft;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -173,6 +199,13 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
*db_handle = db;
+
+ /* Preset data */
+
+ db->flm[0].ft[1].idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ db->flm[0].ft[1].idx.id1 = 1;
+ db->flm[0].ft[1].ref = 1;
+
return 0;
}
@@ -235,6 +268,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ hw_db_inline_flm_ft_deref(ndev, db_handle,
+ *(struct hw_db_flm_ft *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -286,6 +324,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -307,6 +348,61 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
/* Filter */
/******************************************************************************/
+/*
+ * lookup refers to key A/B/C/D, and can have values 0, 1, 2, and 3.
+ */
+static void hw_db_set_ft(struct flow_nic_dev *ndev, int type, int cfn_index, int lookup,
+ int flow_type, int enable)
+{
+ (void)type;
+ (void)enable;
+
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index = (8 * flow_type + cfn_index / cat_funcs) * max_lookups + lookup;
+ int fte_field = cfn_index % cat_funcs;
+
+ uint32_t current_bm = 0;
+ uint32_t fte_field_bm = 1 << fte_field;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t final_bm = enable ? (fte_field_bm | current_bm) : (~fte_field_bm & current_bm);
+
+ if (current_bm != final_bm) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
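`hw_db_set_ft` above addresses the FTE table so that each entry carries an enable bitmap covering a slice of the CFN space: `fte_index` selects the entry from the flow type, the CFN slice, and the lookup key, while `fte_field` selects the bit within the bitmap. A standalone restatement of that arithmetic (helper names invented for the example, `cat_funcs` being `nb_cat_funcs / 8` as in the driver):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_LOOKUPS 4

/* FTE entry index, as in hw_db_set_ft:
 * (8 * flow_type + cfn_index / cat_funcs) * max_lookups + lookup. */
static int fte_index(int cat_funcs, int flow_type, int cfn_index, int lookup)
{
	return (8 * flow_type + cfn_index / cat_funcs) * MAX_LOOKUPS + lookup;
}

/* Bit position of this CFN inside the entry's enable bitmap. */
static int fte_field(int cat_funcs, int cfn_index)
{
	return cfn_index % cat_funcs;
}

/* Set or clear one CFN bit, as in the final_bm computation. */
static uint32_t fte_update(uint32_t current_bm, int field, int enable)
{
	uint32_t bit = 1u << field;
	return enable ? (current_bm | bit) : (current_bm & ~bit);
}
```

Computing `final_bm` from the current bitmap is also what lets the driver skip the set/flush pair entirely when the bit is already in the requested state.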
+
/*
* Setup a filter to match:
* All packets in CFN checks
@@ -348,6 +444,17 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
return -1;
+ /* KM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match FT=ft_argument for look-up C */
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft, 1);
+
/* Make all CFN checks TRUE */
if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
return -1;
@@ -1252,6 +1359,133 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+/******************************************************************************/
+/* FLM FT */
+/******************************************************************************/
+
+static int hw_db_inline_flm_ft_compare(const struct hw_db_inline_flm_ft_data *data1,
+ const struct hw_db_inline_flm_ft_data *data2)
+{
+ return data1->is_group_zero == data2->is_group_zero && data1->jump == data2->jump &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ if (data->is_group_zero) {
+ idx.error = 1;
+ return idx;
+ }
+
+ if (flm_rcp->ft[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->group];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ /* RCP 0 always uses FT 1; i.e. use unhandled FT for disabled RCP */
+ if (data->group == 0) {
+ idx.id1 = 1;
+ return idx;
+ }
+
+ if (data->is_group_zero) {
+ idx.id3 = 1;
+ return idx;
+ }
+
+ /* FLM_FT records 0, 1 and last (15) are reserved */
+ /* NOTE: RES_FLM_FLOW_TYPE resource is global and it cannot be used in _add() and _deref()
+ * to track usage of FLM_FT recipes which are group specific.
+ */
+ for (uint32_t i = 2; i < db->nb_flm_ft; ++i) {
+ if (!found && flm_rcp->ft[i].ref <= 0 &&
+ !flow_nic_is_resource_used(ndev, RES_FLM_FLOW_TYPE, i)) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (flm_rcp->ft[i].ref > 0 &&
+ hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error && idx.id3 == 0)
+ db->flm[idx.id2].ft[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+
+ if (idx.error || idx.id2 == 0 || idx.id3 > 0)
+ return;
+
+ flm_rcp = &db->flm[idx.id2];
+
+ flm_rcp->ft[idx.id1].ref -= 1;
+
+ if (flm_rcp->ft[idx.id1].ref > 0)
+ return;
+
+ flm_rcp->ft[idx.id1].ref = 0;
+ memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
+}
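Both hw_db_inline_km_ft_add() and hw_db_inline_flm_ft_add() above follow the same find-or-allocate pattern: scan for a live entry with identical data and bump its reference count, otherwise claim the first free slot. A simplified, self-contained sketch of the pattern (the `entry` type and `find_or_alloc` helper are illustrative, not driver API):

```c
#include <assert.h>

/* Illustrative types only -- the real driver entries hold FT data and an idx. */
struct entry {
	int ref;	/* 0 means the slot is free */
	int key;	/* stand-in for hw_db_inline_flm_ft_data */
};

/* Reuse a matching live entry (bump its refcount), otherwise claim the
 * first free slot; return the slot index, or -1 on exhaustion. */
static int find_or_alloc(struct entry *tab, int n, int key)
{
	int free_idx = -1;

	for (int i = 0; i < n; ++i) {
		if (tab[i].ref > 0 && tab[i].key == key) {
			tab[i].ref += 1;
			return i;
		}

		if (free_idx < 0 && tab[i].ref == 0)
			free_idx = i;
	}

	if (free_idx >= 0) {
		tab[free_idx].key = key;
		tab[free_idx].ref = 1;
	}

	return free_idx;
}
```

The driver version additionally checks flow_nic_is_resource_used() before treating a slot as free, since the global RES_FLM_FLOW_TYPE resource may reserve entries that the per-group table does not track.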
/******************************************************************************/
/* HSH */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 18d959307e..a520ae1769 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_match_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_action_set_idx {
HW_DB_IDX;
};
@@ -106,6 +110,13 @@ struct hw_db_tpe_ext_idx {
HW_DB_IDX;
};
+struct hw_db_flm_idx {
+ HW_DB_IDX;
+};
+struct hw_db_flm_ft {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -128,6 +139,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE_EXT,
HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -211,6 +223,17 @@ struct hw_db_inline_km_ft_data {
struct hw_db_action_set_idx action_set;
};
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -277,6 +300,16 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx);
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_ft idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 35ecea28b6..5ad2ceb4ca 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -11,6 +11,7 @@
#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "flow_id_table.h"
#include "stream_binary_flow_api.h"
@@ -47,6 +48,128 @@ static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
return -1;
}
+/*
+ * Flow Matcher functionality
+ */
+
+static int flm_sdram_calibrate(struct flow_nic_dev *ndev)
+{
+ int success = 0;
+ uint32_t fail_value = 0;
+ uint32_t value = 0;
+
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_PRESET_ALL, 0x0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_SPLIT_SDRAM_USAGE, 0x10);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for DDR4 calibration/init done */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_SUCCESS, &value);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_FAIL, &fail_value);
+
+ if (value & 0x80000000) {
+ success = 1;
+ break;
+ }
+
+ if (fail_value != 0)
+ break;
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - SDRAM calibration failed");
+ NT_LOG(ERR, FILTER,
+ "Calibration status: success 0x%08" PRIx32 " - fail 0x%08" PRIx32,
+ value, fail_value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
+{
+ int success = 0;
+
+ /*
+ * Make sure no lookup is performed during init, i.e.
+ * disable every category and disable FLM
+ */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for FLM to enter Idle state */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_IDLE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - Never idle");
+ return -1;
+ }
+
+ success = 0;
+
+ /* Start SDRAM initialization */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x1);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_INITDONE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER,
+ "FLM initialization failed - SDRAM initialization incomplete");
+ return -1;
+ }
+
+ /* Set the INIT value back to zero to clear the bit in the SW register cache */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Enable FLM */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, enable);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ int nb_rpp_per_ps = ndev->be.flm.nb_rpp_clock_in_ps;
+ int nb_load_aps_max = ndev->be.flm.nb_load_aps_max;
+ uint32_t scan_i_value = 0;
+
+ if (NTNIC_SCANNER_LOAD > 0) {
+ scan_i_value = (1 / (nb_rpp_per_ps * 0.000000000001)) /
+ (nb_load_aps_max * NTNIC_SCANNER_LOAD);
+ }
+
+ hw_mod_flm_scan_set(&ndev->be, HW_FLM_SCAN_I, scan_i_value);
+ hw_mod_flm_scan_flush(&ndev->be);
+
+ return 0;
+}
+
+
+
struct flm_flow_key_def_s {
union {
struct {
@@ -2355,11 +2478,11 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data,
- uint32_t group __rte_unused,
+ uint32_t group,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
- uint16_t *flm_rpl_ext_ptr __rte_unused,
- uint32_t *flm_ft __rte_unused,
+ uint16_t *flm_rpl_ext_ptr,
+ uint32_t *flm_ft,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
@@ -2508,6 +2631,25 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 0,
+ .group = group,
+ };
+ struct hw_db_flm_ft flm_ft_idx = empty_pattern
+ ? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
+ : hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ local_idxs[(*local_idx_counter)++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_ft)
+ *flm_ft = flm_ft_idx.id1;
+
return 0;
}
@@ -2515,7 +2657,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
- uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t num_dest_port, uint32_t num_queues,
uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
@@ -2809,6 +2951,21 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 1,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ };
+ struct hw_db_flm_ft flm_ft_idx =
+ hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -3029,6 +3186,63 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
NT_VIOLATING_MBR_QSL) < 0)
goto err_exit0;
+ /* FLM */
+ if (flm_sdram_calibrate(ndev) < 0)
+ goto err_exit0;
+
+ if (flm_sdram_reset(ndev, 1) < 0)
+ goto err_exit0;
+
+ /* Learn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LDS, 0);
+ /* Learn fail status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LFS, 1);
+ /* Learn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LIS, 1);
+ /* Unlearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UDS, 0);
+ /* Unlearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UIS, 0);
+ /* Relearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RDS, 0);
+ /* Relearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RIS, 0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RBL, 4);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Set the sliding window size for FLM load */
+ uint32_t bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) /
+ (32ULL * ndev->be.flm.nb_rpp_clock_in_ps)) -
+ 1ULL);
+ hw_mod_flm_load_bin_set(&ndev->be, HW_FLM_LOAD_BIN, bin);
+ hw_mod_flm_load_bin_flush(&ndev->be);
+
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT0,
+ 0); /* Drop at 100% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT0, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT1,
+ 14); /* Drop at 87.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT1, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT2,
+ 10); /* Drop at 62.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT2, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT3,
+ 6); /* Drop at 37.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT3, 1);
+ hw_mod_flm_prio_flush(&ndev->be);
+
+ /* TODO How to set and use these limits */
+ for (uint32_t i = 0; i < ndev->be.flm.nb_pst_profiles; ++i) {
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_BP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_PP, i,
+ NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_TP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT);
+ }
+
+ hw_mod_flm_pst_flush(&ndev->be, 0, ALL_ENTRIES);
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -3057,6 +3271,8 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
#endif
if (ndev->flow_mgnt_prepared) {
+ flm_sdram_reset(ndev, 0);
+
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
new file mode 100644
index 0000000000..9e454e4c0f
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -0,0 +1,129 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_CONFIG_H_
+#define _FLOW_API_PROFILE_INLINE_CONFIG_H_
+
+/*
+ * Per port configuration for IPv4 fragmentation and DF flag handling
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV4_FRAGMENTATION | IPV4_DF_ACTION || Exceeding MTU | DF flag || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - | - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_DROP || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_FORWARD || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV4_DF_ACTION IPV4_DF_DROP
+
+#define PORT_1_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV4_DF_ACTION IPV4_DF_DROP
+
+
+/*
+ * Per port configuration for IPv6 fragmentation
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV6_FRAGMENTATION | IPV6_ACTION || Exceeding MTU || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DROP || no || Forward ||
+ * || | || yes || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | FRAGMENT || no || Forward ||
+ * || | || yes || Fragment ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV6_ACTION IPV6_DROP
+
+#define PORT_1_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV6_ACTION IPV6_DROP
+
+
+/*
+ * Statistics are generated each time the byte counter crosses a limit.
+ * If BYTE_LIMIT is zero then the byte counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_LIMIT + 15) bytes
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(8 + 15) = 2^23 ~~ 8MB
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT 8
+
+/*
+ * Statistics are generated each time the packet counter crosses a limit.
+ * If PKT_LIMIT is zero then the packet counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(PKT_LIMIT + 11) pkts
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(5 + 11) = 2^16 pkts ~~ 64K pkts
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT 5
+
+/*
+ * Statistics are generated each time flow time (measured in ns) crosses a
+ * limit.
+ * If BYTE_TIMEOUT is zero then the flow time does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_TIMEOUT + 15) ns
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(23 + 15) = 2^38 ns ~~ 275 sec
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT 23
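The three encodings above all have the form 2^(value + bias), with zero disabling the trigger. A small sketch decoding them (the helper names are invented; the bias values are taken from the comments above):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative decoders (not driver API) for the periodic-statistics
 * limit encodings; a value of zero disables the trigger entirely. */
static uint64_t pst_byte_limit(uint32_t v) { return v ? 1ULL << (v + 15) : 0; }
static uint64_t pst_pkt_limit(uint32_t v)  { return v ? 1ULL << (v + 11) : 0; }
static uint64_t pst_timeout_ns(uint32_t v) { return v ? 1ULL << (v + 15) : 0; }
```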
+
+/*
+ * This define sets the percentage of the full processing capacity
+ * being reserved for scan operations. The scanner is responsible
+ * for detecting aged out flows and meters with statistics timeout.
+ *
+ * A high scanner load percentage will make this detection more precise
+ * but will also give lower packet processing capacity.
+ *
+ * The percentage is given as a decimal number, e.g. 0.01 for 1%, which is the recommended value.
+ */
+#define NTNIC_SCANNER_LOAD 0.01
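flm_sdram_reset() in flow_api_profile_inline.c turns this load fraction into the HW_FLM_SCAN_I interval, essentially clock_frequency / (nb_load_aps_max * load). A hedged sketch of that arithmetic (the function name and the sample values in the test are made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not driver API): derive the scan interval from the
 * RPP clock period (in picoseconds), the maximum accesses per scan, and
 * the configured load fraction. */
static uint32_t scan_interval(uint64_t rpp_clock_ps, uint64_t load_aps_max, double load)
{
	if (load <= 0.0)
		return 0;

	/* Convert the clock period in picoseconds to a frequency in Hz */
	double clock_hz = 1000000000000.0 / (double)rpp_clock_ps;

	return (uint32_t)(clock_hz / ((double)load_aps_max * load));
}
```

A lower load fraction yields a larger interval value, i.e. fewer scan accesses per second and more capacity left for packet processing.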
+
+/*
+ * This define sets the timeout resolution of aged flow scanner (scrubber).
+ *
+ * The timeout resolution feature is provided in order to reduce the number of
+ * write-back operations for flows without attached meter. If the resolution
+ * is disabled (set to 0) and flow timeout is enabled via age action, then a write-back
+ * occurs every time the flow is evicted from the flow cache, essentially causing the
+ * lookup performance to drop to that of a flow with meter. By setting the timeout
+ * resolution (>0), write-back for flows happens only when the difference between
+ * the last recorded time for the flow and the current time exceeds the chosen resolution.
+ *
+ * The parameter value is a power of 2 in units of 2^28 nanoseconds. It means that value 8 sets
+ * the timeout resolution to: 2^8 * 2^28 / 1e9 = 68.7 seconds
+ *
+ * NOTE: This parameter has a significant impact on flow lookup performance, especially
+ * if full scanner timeout resolution (=0) is configured.
+ */
+#define NTNIC_SCANNER_TIMEOUT_RESOLUTION 8
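The resolution-to-seconds conversion stated in the comment above can be checked numerically; a small sketch (helper name invented for illustration):

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Illustrative conversion (not driver API): the scrubber timeout
 * resolution is 2^res units of 2^28 ns, expressed here in seconds. */
static double scrub_resolution_sec(unsigned int res)
{
	return (double)(1ULL << res) * (double)(1ULL << 28) / 1e9;
}
```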
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 71ecd6c68c..a482fb43ad 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -16,6 +16,14 @@
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
+/*
+ * Window size in seconds for measuring the FLM load
+ * and port load.
+ * The window size must not exceed 3 minutes in order
+ * to prevent overflow.
+ */
+#define FLM_LOAD_WINDOWS_SIZE 2ULL
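initialize_flow_management_of_ndev_profile_inline() converts this window size into the HW_FLM_LOAD_BIN value as (window * 10^12) / (32 * rpp_clock_ps) - 1. A sketch of that computation (the function name is illustrative; the RPP clock period in the test is a made-up sample value):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not driver API) of the HW_FLM_LOAD_BIN computation:
 * window size in seconds, RPP clock period in picoseconds. */
static uint32_t flm_load_bin(uint64_t window_sec, uint64_t rpp_clock_ps)
{
	return (uint32_t)((window_sec * 1000000000000ULL) /
			(32ULL * rpp_clock_ps) - 1ULL);
}
```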
+
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
#define PCIIDENT_TO_BUSNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 8) & 0xFFU))
#define PCIIDENT_TO_DEVNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 3) & 0x1FU))
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 34/73] net/ntnic: add flm rcp module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (32 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 33/73] net/ntnic: add FLM module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 35/73] net/ntnic: add learn flow queue handling Serhii Iliushyk
` (39 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup
and programming engine which supports exact match lookup
at line rate for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 133 ++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 +++++++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 20 ++
.../profile_inline/flow_api_profile_inline.c | 42 +++-
5 files changed, 390 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index de662c4ed1..13722c30a9 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -683,6 +683,10 @@ int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value);
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f5eaea7c4e..0a7e90c04f 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -579,3 +579,136 @@ int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int cou
}
return be->iface->flm_scrub_flush(be->be_dev, &be->flm, start_idx, count);
}
+
+static int hw_mod_flm_rcp_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.rcp[index], (uint8_t)*value,
+ sizeof(struct flm_v25_rcp_s));
+ break;
+
+ case HW_FLM_RCP_LOOKUP:
+ GET_SET(be->flm.v25.rcp[index].lookup, value);
+ break;
+
+ case HW_FLM_RCP_QW0_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW0_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_FLM_RCP_QW0_SEL:
+ GET_SET(be->flm.v25.rcp[index].qw0_sel, value);
+ break;
+
+ case HW_FLM_RCP_QW4_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW4_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw8_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW8_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw8_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_SEL:
+ GET_SET(be->flm.v25.rcp[index].sw8_sel, value);
+ break;
+
+ case HW_FLM_RCP_SW9_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw9_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW9_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw9_ofs, value);
+ break;
+
+ case HW_FLM_RCP_MASK:
+ if (get) {
+ memcpy(value, be->flm.v25.rcp[index].mask,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+
+ } else {
+ memcpy(be->flm.v25.rcp[index].mask, value,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+ }
+
+ break;
+
+ case HW_FLM_RCP_KID:
+ GET_SET(be->flm.v25.rcp[index].kid, value);
+ break;
+
+ case HW_FLM_RCP_OPN:
+ GET_SET(be->flm.v25.rcp[index].opn, value);
+ break;
+
+ case HW_FLM_RCP_IPN:
+ GET_SET(be->flm.v25.rcp[index].ipn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_DYN:
+ GET_SET(be->flm.v25.rcp[index].byt_dyn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_OFS:
+ GET_SET(be->flm.v25.rcp[index].byt_ofs, value);
+ break;
+
+ case HW_FLM_RCP_TXPLM:
+ GET_SET(be->flm.v25.rcp[index].txplm, value);
+ break;
+
+ case HW_FLM_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->flm.v25.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value)
+{
+ if (field != HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, value, 0);
+}
+
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ if (field == HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 61492090ce..0ae058b91e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -68,6 +68,9 @@ struct hw_db_inline_resource_db {
} *cat;
struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_flm_rcp_data data;
+ int ref;
+
struct hw_db_inline_resource_db_flm_ft {
struct hw_db_inline_flm_ft_data data;
struct hw_db_flm_ft idx;
@@ -96,6 +99,7 @@ struct hw_db_inline_resource_db {
uint32_t nb_cat;
uint32_t nb_flm_ft;
+ uint32_t nb_flm_rcp;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -164,6 +168,42 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+
+ db->nb_flm_ft = ndev->be.cat.nb_flow_types;
+ db->nb_flm_rcp = ndev->be.flm.nb_categories;
+ db->flm = calloc(db->nb_flm_rcp, sizeof(struct hw_db_inline_resource_db_flm_rcp));
+
+ if (db->flm == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ db->flm[i].ft =
+ calloc(db->nb_flm_ft, sizeof(struct hw_db_inline_resource_db_flm_ft));
+
+ if (db->flm[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].match_set =
+ calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_flm_match_set));
+
+ if (db->flm[i].match_set == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].cfn_map = calloc(db->nb_cat * db->nb_flm_ft,
+ sizeof(struct hw_db_inline_resource_db_flm_cfn_map));
+
+ if (db->flm[i].cfn_map == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
db->nb_km_ft = ndev->be.cat.nb_flow_types;
db->nb_km_rcp = ndev->be.km.nb_categories;
db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
@@ -222,6 +262,16 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cat);
+ if (db->flm) {
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ free(db->flm[i].ft);
+ free(db->flm[i].match_set);
+ free(db->flm[i].cfn_map);
+ }
+
+ free(db->flm);
+ }
+
if (db->km) {
for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
free(db->km[i].ft);
@@ -268,6 +318,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ hw_db_inline_flm_deref(ndev, db_handle, *(struct hw_db_flm_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_FLM_FT:
hw_db_inline_flm_ft_deref(ndev, db_handle,
*(struct hw_db_flm_ft *)&idxs[i]);
@@ -324,6 +378,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ return &db->flm[idxs[i].id1].data;
+
case HW_DB_IDX_TYPE_FLM_FT:
return NULL; /* FTs can't be easily looked up */
@@ -481,6 +538,20 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
return 0;
}
+static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int flm_rcp)
+{
+ uint32_t flm_mask[10];
+ memset(flm_mask, 0xff, sizeof(flm_mask));
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, flm_rcp, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, flm_rcp, 1);
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, flm_rcp, flm_mask);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, flm_rcp, flm_rcp + 2);
+
+ hw_mod_flm_rcp_flush(&ndev->be, flm_rcp, 1);
+}
+
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1268,10 +1339,17 @@ void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_d
void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
{
(void)ndev;
- (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
if (idx.error)
return;
+
+ db->flm[idx.id1].ref -= 1;
+
+ if (db->flm[idx.id1].ref <= 0) {
+ memset(&db->flm[idx.id1].data, 0x0, sizeof(struct hw_db_inline_km_rcp_data));
+ db->flm[idx.id1].ref = 0;
+ }
}
/******************************************************************************/
@@ -1359,6 +1437,121 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_flm_compare(const struct hw_db_inline_flm_rcp_data *data1,
+ const struct hw_db_inline_flm_rcp_data *data2)
+{
+ if (data1->qw0_dyn != data2->qw0_dyn || data1->qw0_ofs != data2->qw0_ofs ||
+ data1->qw4_dyn != data2->qw4_dyn || data1->qw4_ofs != data2->qw4_ofs ||
+ data1->sw8_dyn != data2->sw8_dyn || data1->sw8_ofs != data2->sw8_ofs ||
+ data1->sw9_dyn != data2->sw9_dyn || data1->sw9_ofs != data2->sw9_ofs ||
+ data1->outer_prot != data2->outer_prot || data1->inner_prot != data2->inner_prot) {
+ return 0;
+ }
+
+ for (int i = 0; i < 10; ++i)
+ if (data1->mask[i] != data2->mask[i])
+ return 0;
+
+ return 1;
+}
+
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_idx idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_RCP;
+ idx.id1 = group;
+
+ if (group == 0)
+ return idx;
+
+ if (db->flm[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_compare(data, &db->flm[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ref(ndev, db, idx);
+ return idx;
+ }
+
+ db->flm[idx.id1].ref = 1;
+ memcpy(&db->flm[idx.id1].data, data, sizeof(struct hw_db_inline_flm_rcp_data));
+
+ {
+ uint32_t flm_mask[10] = {
+ data->mask[0], /* SW9 */
+ data->mask[1], /* SW8 */
+ data->mask[5], data->mask[4], data->mask[3], data->mask[2], /* QW4 */
+ data->mask[9], data->mask[8], data->mask[7], data->mask[6], /* QW0 */
+ };
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, idx.id1, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, idx.id1, 1);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_DYN, idx.id1, data->qw0_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_OFS, idx.id1, data->qw0_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_DYN, idx.id1, data->qw4_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_OFS, idx.id1, data->qw4_ofs);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_DYN, idx.id1, data->sw8_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_OFS, idx.id1, data->sw8_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_DYN, idx.id1, data->sw9_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_OFS, idx.id1, data->sw9_ofs);
+
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, idx.id1, flm_mask);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, idx.id1, idx.id1 + 2);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_OPN, idx.id1, data->outer_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_IPN, idx.id1, data->inner_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_DYN, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_OFS, idx.id1, -20);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_TXPLM, idx.id1, UINT32_MAX);
+
+ hw_mod_flm_rcp_flush(&ndev->be, idx.id1, 1);
+ }
+
+ return idx;
+}
+
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->flm[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ if (idx.id1 > 0) {
+ db->flm[idx.id1].ref -= 1;
+
+ if (db->flm[idx.id1].ref <= 0) {
+ memset(&db->flm[idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_flm_rcp_data));
+ db->flm[idx.id1].ref = 0;
+
+ hw_db_inline_setup_default_flm_rcp(ndev, idx.id1);
+ }
+ }
+}
+
/******************************************************************************/
/* FLM FT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a520ae1769..9820225ffa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -138,6 +138,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE,
HW_DB_IDX_TYPE_TPE_EXT,
+ HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
@@ -165,6 +166,22 @@ struct hw_db_inline_cat_data {
uint8_t ip_prot_tunnel;
};
+struct hw_db_inline_flm_rcp_data {
+ uint64_t qw0_dyn : 5;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 5;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 5;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 5;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_prot : 1;
+ uint64_t inner_prot : 1;
+ uint64_t padding : 10;
+
+ uint32_t mask[10];
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -300,7 +317,10 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group);
void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_flm_ft_data *data);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 5ad2ceb4ca..719f5fcdec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -101,6 +101,11 @@ static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
hw_mod_flm_control_flush(&ndev->be);
+ for (uint32_t i = 1; i < ndev->be.flm.nb_categories; ++i)
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, i, 0x0);
+
+ hw_mod_flm_rcp_flush(&ndev->be, 1, ndev->be.flm.nb_categories - 1);
+
/* Wait for FLM to enter Idle state */
for (uint32_t i = 0; i < 1000000; ++i) {
uint32_t value = 0;
@@ -2658,8 +2663,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port, uint32_t num_queues,
- uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
- struct flm_flow_key_def_s *key_def __rte_unused)
+ uint32_t *packet_data, uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
@@ -2692,6 +2697,31 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
* Flow for group 1..32
*/
+ /* Setup FLM RCP */
+ struct hw_db_inline_flm_rcp_data flm_data = {
+ .qw0_dyn = key_def->qw0_dyn,
+ .qw0_ofs = key_def->qw0_ofs,
+ .qw4_dyn = key_def->qw4_dyn,
+ .qw4_ofs = key_def->qw4_ofs,
+ .sw8_dyn = key_def->sw8_dyn,
+ .sw8_ofs = key_def->sw8_ofs,
+ .sw9_dyn = key_def->sw9_dyn,
+ .sw9_ofs = key_def->sw9_ofs,
+ .outer_prot = key_def->outer_proto,
+ .inner_prot = key_def->inner_proto,
+ };
+ memcpy(flm_data.mask, packet_mask, sizeof(uint32_t) * 10);
+ struct hw_db_flm_idx flm_idx =
+ hw_db_inline_flm_add(dev->ndev, dev->ndev->hw_db_handle, &flm_data,
+ attr->group);
+ fh->db_idxs[fh->db_idx_counter++] = flm_idx.raw;
+
+ if (flm_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup Actions */
uint16_t flm_rpl_ext_ptr = 0;
uint32_t flm_ft = 0;
@@ -2704,7 +2734,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
/* Program flow */
- convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ convert_fh_to_fh_flm(fh, packet_data, flm_idx.id1 + 2, flm_ft, flm_rpl_ext_ptr,
flm_scrub, attr->priority & 0x3);
flm_flow_programming(fh, NT_FLM_OP_LEARN);
@@ -3276,6 +3306,12 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, 0, 0);
+ hw_mod_flm_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
+ flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
--
2.45.0
* [PATCH v2 35/73] net/ntnic: add learn flow queue handling
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement a thread that handles the flow learn queue.
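The learn-queue pattern this patch adds is: producers enqueue fixed-size learn records into a multi-producer/single-consumer ring, and a dedicated thread drains them in bursts, releasing only the records the backend reports as handled. A minimal stand-in sketch of that discipline in plain C (the driver itself uses the rte_ring zero-copy element APIs; all names below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal stand-in for the MP-enqueue/SC-dequeue learn queue.
 * QUEUE_SIZE mirrors the power-of-two sizing ((1 << 13) in the patch). */
#define QUEUE_SIZE 8
#define ELEM_WORDS 4   /* stand-in for sizeof(struct flm_v25_lrn_data_s)/4 */

struct lrn_queue {
	uint32_t buf[QUEUE_SIZE][ELEM_WORDS];
	unsigned int head;   /* consumer index */
	unsigned int tail;   /* producer index */
};

static int lrn_enqueue(struct lrn_queue *q, const uint32_t *rec)
{
	if (q->tail - q->head == QUEUE_SIZE)
		return -1;   /* queue full */
	memcpy(q->buf[q->tail % QUEUE_SIZE], rec, sizeof(uint32_t) * ELEM_WORDS);
	q->tail++;
	return 0;
}

/* Mirrors the shape of flm_lrn_update(): peek the available burst,
 * hand it to the backend, then release only what was handled. */
static unsigned int lrn_drain(struct lrn_queue *q, unsigned int handled_by_hw)
{
	unsigned int avail = q->tail - q->head;
	unsigned int done = handled_by_hw < avail ? handled_by_hw : avail;

	q->head += done;   /* release consumed records */
	return done;
}
```

The same release-what-was-handled step is why the patch only calls flm_lrn_queue_release_read_buffer() with the handled record count rather than the full burst.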
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 5 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 33 +++++++
.../flow_api/profile_inline/flm_lrn_queue.c | 42 +++++++++
.../flow_api/profile_inline/flm_lrn_queue.h | 11 +++
.../profile_inline/flow_api_profile_inline.c | 48 ++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 94 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 241 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 13722c30a9..17d5755634 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,11 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt);
+
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
struct hsh_func_s {
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8017aa4fc3..8ebdd98db0 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -14,6 +14,7 @@ typedef struct ntdrv_4ga_s {
char *p_drv_name;
volatile bool b_shutdown;
+ rte_thread_t flm_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 0a7e90c04f..f4c29b8bde 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,3 +712,36 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ int ret = 0;
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_LRN_DATA:
+ ret = be->iface->flm_lrn_data_flush(be->be_dev, &be->flm, value, records,
+ handled_records,
+ (sizeof(struct flm_v25_lrn_data_s) /
+ sizeof(uint32_t)),
+ inf_word_cnt, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
index ad7efafe08..6e77c28f93 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -13,8 +13,28 @@
#include "flm_lrn_queue.h"
+#define QUEUE_SIZE (1 << 13)
+
#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+void *flm_lrn_queue_create(void)
+{
+ static_assert((ELEM_SIZE & ~(size_t)3) == ELEM_SIZE, "FLM LEARN struct size");
+ struct rte_ring *q = rte_ring_create_elem("RFQ",
+ ELEM_SIZE,
+ QUEUE_SIZE,
+ SOCKET_ID_ANY,
+ RING_F_MP_HTS_ENQ | RING_F_SC_DEQ);
+ assert(q != NULL);
+ return q;
+}
+
+void flm_lrn_queue_free(void *q)
+{
+ if (q)
+ rte_ring_free(q);
+}
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q)
{
struct rte_ring_zc_data zcd;
@@ -26,3 +46,25 @@ void flm_lrn_queue_release_write_buffer(void *q)
{
rte_ring_enqueue_zc_elem_finish(q, 1);
}
+
+read_record flm_lrn_queue_get_read_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ read_record rr;
+
+ if (rte_ring_dequeue_zc_burst_elem_start(q, ELEM_SIZE, QUEUE_SIZE, &zcd, NULL) != 0) {
+ rr.num = zcd.n1;
+ rr.p = zcd.ptr1;
+
+ } else {
+ rr.num = 0;
+ rr.p = NULL;
+ }
+
+ return rr;
+}
+
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num)
+{
+ rte_ring_dequeue_zc_elem_finish(q, num);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
index 8cee0c8e78..40558f4201 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -8,7 +8,18 @@
#include <stdint.h>
+typedef struct read_record {
+ uint32_t *p;
+ uint32_t num;
+} read_record;
+
+void *flm_lrn_queue_create(void);
+void flm_lrn_queue_free(void *q);
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q);
void flm_lrn_queue_release_write_buffer(void *q);
+read_record flm_lrn_queue_get_read_buffer(void *q);
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num);
+
#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 719f5fcdec..0b8ac26b83 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -39,6 +39,48 @@
static void *flm_lrn_queue_arr;
+static void flm_setup_queues(void)
+{
+ flm_lrn_queue_arr = flm_lrn_queue_create();
+ assert(flm_lrn_queue_arr != NULL);
+}
+
+static void flm_free_queues(void)
+{
+ flm_lrn_queue_free(flm_lrn_queue_arr);
+}
+
+static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ read_record r = flm_lrn_queue_get_read_buffer(flm_lrn_queue_arr);
+
+ if (r.num) {
+ uint32_t handled_records = 0;
+
+ if (hw_mod_flm_lrn_data_set_flush(&dev->ndev->be, HW_FLM_FLOW_LRN_DATA, r.p, r.num,
+ &handled_records, inf_word_cnt, sta_word_cnt)) {
+ NT_LOG(ERR, FILTER, "Flow programming failed");
+
+ } else if (handled_records > 0) {
+ flm_lrn_queue_release_read_buffer(flm_lrn_queue_arr, handled_records);
+ }
+ }
+
+ return r.num;
+}
+
+static uint32_t flm_update(struct flow_eth_dev *dev)
+{
+ static uint32_t inf_word_cnt;
+ static uint32_t sta_word_cnt;
+
+ if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
+ return 1;
+
+ return inf_word_cnt + sta_word_cnt;
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -4219,6 +4261,12 @@ static const struct profile_inline_ops ops = {
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * NT Flow FLM Meter API
+ */
+ .flm_setup_queues = flm_setup_queues,
+ .flm_free_queues = flm_free_queues,
+ .flm_update = flm_update,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index a509a8eb51..bfca8f28b1 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -24,6 +24,11 @@
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
+#define THREAD_JOIN(a) rte_thread_join(a, NULL)
+#define THREAD_FUNC static uint32_t
+#define THREAD_RETURN (0)
#define HW_MAX_PKT_LEN (10000)
#define MAX_MTU (HW_MAX_PKT_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)
@@ -120,6 +125,16 @@ store_pdrv(struct drv_s *p_drv)
rte_spinlock_unlock(&hwlock);
}
+static void clear_pdrv(struct drv_s *p_drv)
+{
+ if (p_drv->adapter_no > NUM_ADAPTER_MAX)
+ return;
+
+ rte_spinlock_lock(&hwlock);
+ _g_p_drv[p_drv->adapter_no] = NULL;
+ rte_spinlock_unlock(&hwlock);
+}
+
static struct drv_s *
get_pdrv_from_pci(struct rte_pci_addr addr)
{
@@ -1240,6 +1255,13 @@ eth_dev_set_link_down(struct rte_eth_dev *eth_dev)
static void
drv_deinit(struct drv_s *p_drv)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ return;
+ }
+
const struct adapter_ops *adapter_ops = get_adapter_ops();
if (adapter_ops == NULL) {
@@ -1251,6 +1273,22 @@ drv_deinit(struct drv_s *p_drv)
return;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ fpga_info_t *fpga_info = &p_nt_drv->adapter_info.fpga_info;
+
+ /*
+ * Mark the global pdrv as cleared. Used by some threads to terminate.
+ * Wait 1 second to give the threads a chance to see the termination.
+ */
+ clear_pdrv(p_drv);
+ nt_os_wait_usec(1000000);
+
+ /* stop statistics threads */
+ p_drv->ntdrv.b_shutdown = true;
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ THREAD_JOIN(p_nt_drv->flm_thread);
+ profile_inline_ops->flm_free_queues();
+ }
/* stop adapter */
adapter_ops->deinit(&p_nt_drv->adapter_info);
@@ -1359,6 +1397,43 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.promiscuous_enable = promiscuous_enable,
};
+/*
+ * Adapter flm stat thread
+ */
+THREAD_FUNC adapter_flm_update_thread_fn(void *context)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: profile_inline module uninitialized", __func__);
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct nt4ga_filter_s *p_nt4ga_filter = &p_adapter_info->nt4ga_filter;
+ struct flow_nic_dev *p_flow_nic_dev = p_nt4ga_filter->mp_flow_device;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: waiting for port configuration",
+ p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (p_flow_nic_dev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ struct flow_eth_dev *dev = p_flow_nic_dev->eth_base;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: begin", p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (!p_drv->ntdrv.b_shutdown)
+ if (profile_inline_ops->flm_update(dev) == 0)
+ nt_os_wait_usec(10);
+
+ NT_LOG(DBG, NTNIC, "%s: %s: end", p_adapter_info->mp_adapter_id_str, __func__);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1369,6 +1444,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* Return statement is not necessary here to allow traffic processing by SW */
}
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1597,6 +1679,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (profile_inline_ops != NULL && fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ profile_inline_ops->flm_setup_queues();
+ res = THREAD_CTRL_CREATE(&p_nt_drv->flm_thread, "ntnic-nt_flm_update_thr",
+ adapter_flm_update_thread_fn, (void *)p_drv);
+
+ if (res) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1069be2f85..27d6cbef01 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -256,6 +256,13 @@ struct profile_inline_ops {
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+
+ /*
+ * NT Flow FLM queue API
+ */
+ void (*flm_setup_queues)(void);
+ void (*flm_free_queues)(void);
+ uint32_t (*flm_update)(struct flow_eth_dev *dev);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v2 36/73] net/ntnic: match and action db attributes were added
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement match and action set dereferencing.
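The match_set[]/action_set[] tables added below follow the same reference-counting discipline as the other inline HW DB entries: an add either claims a free slot, shares an identical entry, or fails, and a deref clears the cached data when the last user goes away. A minimal sketch of that discipline, with illustrative names of our own (not the driver's API):

```c
#include <assert.h>
#include <string.h>

/* Illustrative refcounted resource-DB slot, mirroring the shape of the
 * db->match_set[] / db->action_set[] entries in the patch. */
struct db_slot {
	int ref;
	unsigned int data;   /* stand-in for the cached HW configuration */
};

static int slot_add(struct db_slot *s, unsigned int data)
{
	if (s->ref > 0) {
		if (s->data != data)
			return -1;   /* slot holds a different config: error */
		s->ref++;            /* identical config: share the entry */
		return 0;
	}
	s->ref = 1;
	s->data = data;
	return 0;
}

static void slot_deref(struct db_slot *s)
{
	/* Last user gone: clear the cached data so the slot can be reused. */
	if (s->ref > 0 && --s->ref == 0)
		memset(&s->data, 0, sizeof(s->data));
}
```

This is why hw_db_inline_deref_idxs() below only needs the index type and id: the slot itself tracks how many flows still reference the programmed hardware state.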
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../profile_inline/flow_api_hw_db_inline.c | 795 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 35 +
.../profile_inline/flow_api_profile_inline.c | 55 ++
3 files changed, 885 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 0ae058b91e..52f85b65af 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,9 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_INLINE_ACTION_SET_NB 512
+#define HW_DB_INLINE_MATCH_SET_NB 512
+
#define HW_DB_FT_LOOKUP_KEY_A 0
#define HW_DB_FT_TYPE_KM 1
@@ -110,6 +113,20 @@ struct hw_db_inline_resource_db {
int cfn_hw;
int ref;
} *cfn;
+
+ uint32_t cfn_priority_counter;
+ uint32_t set_priority_counter;
+
+ struct hw_db_inline_resource_db_action_set {
+ struct hw_db_inline_action_set_data data;
+ int ref;
+ } action_set[HW_DB_INLINE_ACTION_SET_NB];
+
+ struct hw_db_inline_resource_db_match_set {
+ struct hw_db_inline_match_set_data data;
+ int ref;
+ uint32_t set_priority;
+ } match_set[HW_DB_INLINE_MATCH_SET_NB];
};
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
@@ -292,6 +309,16 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ hw_db_inline_match_set_deref(ndev, db_handle,
+ *(struct hw_db_match_set_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ hw_db_inline_action_set_deref(ndev, db_handle,
+ *(struct hw_db_action_set_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_CAT:
hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
break;
@@ -360,6 +387,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_NONE:
return NULL;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ return &db->match_set[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ return &db->action_set[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_CAT:
return &db->cat[idxs[i].ids].data;
@@ -552,6 +585,763 @@ static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int fl
}
+static void hw_db_copy_ft(struct flow_nic_dev *ndev, int type, int cfn_dst, int cfn_src,
+ int lookup, int flow_type)
+{
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index_dst = (8 * flow_type + cfn_dst / cat_funcs) * max_lookups + lookup;
+ int fte_field_dst = cfn_dst % cat_funcs;
+
+ int fte_index_src = (8 * flow_type + cfn_src / cat_funcs) * max_lookups + lookup;
+ int fte_field_src = cfn_src % cat_funcs;
+
+ uint32_t current_bm_dst = 0;
+ uint32_t current_bm_src = 0;
+ uint32_t fte_field_bm_dst = 1 << fte_field_dst;
+ uint32_t fte_field_bm_src = 1 << fte_field_src;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_src, &current_bm_src);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_src, &current_bm_src);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t enable = current_bm_src & fte_field_bm_src;
+ uint32_t final_bm_dst = enable ? (fte_field_bm_dst | current_bm_dst)
+ : (~fte_field_bm_dst & current_bm_dst);
+
+ if (current_bm_dst != final_bm_dst) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+
+static int hw_db_inline_filter_apply(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id,
+ struct hw_db_match_set_idx match_set_idx,
+ struct hw_db_flm_ft flm_ft_idx,
+ struct hw_db_action_set_idx action_set_idx)
+{
+ (void)match_set_idx;
+ (void)flm_ft_idx;
+
+ const struct hw_db_inline_match_set_data *match_set =
+ &db->match_set[match_set_idx.ids].data;
+ const struct hw_db_inline_cat_data *cat = &db->cat[match_set->cat.ids].data;
+
+ const int km_ft = match_set->km_ft.id1;
+ const int km_rcp = (int)db->km[match_set->km.id1].data.rcp;
+
+ const int flm_ft = flm_ft_idx.id1;
+ const int flm_rcp = flm_ft_idx.id2;
+
+ const struct hw_db_inline_action_set_data *action_set =
+ &db->action_set[action_set_idx.ids].data;
+ const struct hw_db_inline_cot_data *cot = &db->cot[action_set->cot.ids].data;
+
+ const int qsl_hw_id = action_set->qsl.ids;
+ const int slc_lr_hw_id = action_set->slc_lr.ids;
+ const int tpe_hw_id = action_set->tpe.ids;
+ const int hsh_hw_id = action_set->hsh.ids;
+
+ /* Setup default FLM RCP if needed */
+ if (flm_rcp > 0 && db->flm[flm_rcp].ref <= 0)
+ hw_db_inline_setup_default_flm_rcp(ndev, flm_rcp);
+
+ /* Setup CAT.CFN */
+ {
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x0);
+
+ /* Protocol checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_ISL, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_CFP, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MAC, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L2, cat_hw_id, 0, cat->ptc_mask_l2);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VNTAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VLAN, cat_hw_id, 0, cat->vlan_mask);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, cat->ptc_mask_l3);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_FRAG, cat_hw_id, 0,
+ cat->ptc_mask_frag);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_IP_PROT, cat_hw_id, 0, cat->ip_prot);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L4, cat_hw_id, 0, cat->ptc_mask_l4);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TUNNEL, cat_hw_id, 0,
+ cat->ptc_mask_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L2, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_VLAN, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L3, cat_hw_id, 0,
+ cat->ptc_mask_l3_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_FRAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_IP_PROT, cat_hw_id, 0,
+ cat->ip_prot_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L4, cat_hw_id, 0,
+ cat->ptc_mask_l4_tunnel);
+
+ /* Error checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_CV, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_FCS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TRUNC, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L3_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L4_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L3_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L4_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl_tunnel);
+
+ /* MAC port check */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_MAC_PORT, cat_hw_id, 0,
+ cat->mac_port_mask);
+
+ /* Pattern match checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMP, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_DCT, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_EXT_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMB, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_AND_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_OR_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_INV, cat_hw_id, 0, -1);
+
+ /* Length checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC_INV, cat_hw_id, 0, -1);
+
+ /* KM and FLM */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3);
+
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 0, cat_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 0, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 1, hsh_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 2, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 2,
+ slc_lr_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 5, tpe_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 5, 0);
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id,
+ 0x001 | 0x004 | (qsl_hw_id ? 0x008 : 0) |
+ (slc_lr_hw_id ? 0x020 : 0) | 0x040 |
+ (tpe_hw_id ? 0x400 : 0));
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ km_rcp);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ flm_rcp);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, flm_ft, 1);
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COLOR, cat_hw_id, cot->frag_rcp << 10);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_KM, cat_hw_id,
+ cot->matcher_color_contrib);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ return 0;
+}
+
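The KCE enable writes above follow a read-modify-write pattern in which categories are packed eight per register, addressed with `cat_hw_id / 8` and `cat_hw_id % 8`. A hypothetical standalone model of that bitmap handling (the register array size and helper names are assumptions, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the KCE enable bitmap: categories are grouped
 * eight per 32-bit register, and one category is toggled by a
 * read-modify-write of its bit, as in hw_mod_cat_kce_km_set() usage. */
static uint32_t kce_regs[16];   /* assumption: 128 categories, 8 per register */

static void kce_enable(int cat_hw_id)
{
	/* Select the register holding this category, then set its bit. */
	kce_regs[cat_hw_id / 8] |= 1u << (cat_hw_id % 8);
}

static void kce_disable(int cat_hw_id)
{
	/* Clear only this category's bit, leaving siblings untouched. */
	kce_regs[cat_hw_id / 8] &= ~(1u << (cat_hw_id % 8));
}
```

The same pattern appears in the filter apply, clear, and copy paths; only the bit operation (set, clear, or transfer) differs.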
+static void hw_db_inline_filter_clear(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id)
+{
+ /* Setup CAT.CFN */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < 6; ++i) {
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + i, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + i, 0);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0);
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft,
+ 0);
+ }
+ }
+
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+}
+
+static void hw_db_inline_filter_copy(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db, int cfn_dst, int cfn_src)
+{
+ uint32_t val = 0;
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_COPY_FROM, cfn_dst, 0, cfn_src);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < offset; ++i) {
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_dst + i, val);
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_dst + i, val);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cfn_dst, offset);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_get(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_src, &val);
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_dst, val);
+ hw_mod_cat_cte_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_km_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_KM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_flm_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_C, ft);
+ }
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COPY_FROM, cfn_dst, cfn_src);
+ hw_mod_cat_cot_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+}
+
+/*
+ * Algorithm for moving CFN entries to make space while respecting priority.
+ * The algorithm makes the fewest possible moves needed to fit a new CFN entry.
+ */
+static int hw_db_inline_alloc_prioritized_cfn(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ struct hw_db_match_set_idx match_set_idx)
+{
+ const struct hw_db_inline_resource_db_match_set *match_set =
+ &db->match_set[match_set_idx.ids];
+
+ uint64_t priority = ((uint64_t)(match_set->data.priority & 0xff) << 56) |
+ ((uint64_t)(0xffffff - (match_set->set_priority & 0xffffff)) << 32) |
+ (0xffffffff - ++db->cfn_priority_counter);
+
+ int db_cfn_idx = -1;
+
+ struct {
+ uint64_t priority;
+ uint32_t idx;
+ } sorted_priority[db->nb_cat];
+
+ memset(sorted_priority, 0x0, sizeof(sorted_priority));
+
+ uint32_t in_use_count = 0;
+
+ for (uint32_t i = 1; i < db->nb_cat; ++i) {
+ if (db->cfn[i].ref > 0) {
+ sorted_priority[db->cfn[i].cfn_hw].priority = db->cfn[i].priority;
+ sorted_priority[db->cfn[i].cfn_hw].idx = i;
+ in_use_count += 1;
+
+ } else if (db_cfn_idx == -1) {
+ db_cfn_idx = (int)i;
+ }
+ }
+
+ if (in_use_count >= db->nb_cat - 1)
+ return -1;
+
+ if (in_use_count == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = 1;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ int goal = 1;
+ int free_before = -1000000;
+ int free_after = 1000000;
+ int found_smaller = 0;
+
+ for (int i = 1; i < (int)db->nb_cat; ++i) {
+ if (sorted_priority[i].priority > priority) { /* Bigger */
+ goal = i + 1;
+
+ } else if (sorted_priority[i].priority == 0) { /* Not set */
+ if (found_smaller) {
+ if (free_after > i)
+ free_after = i;
+
+ } else {
+ free_before = i;
+ }
+
+ } else {/* Smaller */
+ found_smaller = 1;
+ }
+ }
+
+ int diff_before = goal - free_before - 1;
+ int diff_after = free_after - goal;
+
+ if (goal < (int)db->nb_cat && sorted_priority[goal].priority == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ if (diff_after <= diff_before) {
+ for (int i = free_after; i > goal; --i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i - 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+
+ } else {
+ goal -= 1;
+
+ for (int i = free_before; i < goal; ++i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i + 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+ }
+
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+
+ return db_cfn_idx;
+}
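The ordering used by the allocator above comes from a composite 64-bit key: the rule priority occupies the top byte, followed by the inverted match-set creation order and an inverted monotonic counter as a tie-breaker, so a newly added entry never displaces an older entry of equal priority. A minimal sketch of that key construction (the function name is hypothetical; the bit layout mirrors the expression in `hw_db_inline_alloc_prioritized_cfn()`):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the 64-bit CFN ordering key: higher keys sort earlier.
 * Larger rule priority dominates; among equal priorities, the earlier
 * created match set (smaller set_priority) yields the larger key. */
static uint64_t cfn_key(uint8_t priority, uint32_t set_priority, uint32_t counter)
{
	return ((uint64_t)(priority & 0xff) << 56) |
	       ((uint64_t)(0xffffff - (set_priority & 0xffffff)) << 32) |
	       (uint64_t)(0xffffffff - counter);
}
```

With this key, the allocator only has to find the gap closest to the insertion goal and shift the smaller-keyed entries toward it, which bounds the number of `filter_copy`/`filter_clear` moves.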
+
+static void hw_db_inline_free_prioritized_cfn(struct hw_db_inline_resource_db *db, int cfn_hw)
+{
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (db->cfn[i].cfn_hw == cfn_hw) {
+ memset(&db->cfn[i], 0x0, sizeof(struct hw_db_inline_resource_db_cfn));
+ break;
+ }
+ }
+}
+
+static void hw_db_inline_update_active_filters(struct flow_nic_dev *ndev, void *db_handle,
+ int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[group];
+ struct hw_db_inline_resource_db_flm_cfn_map *cell;
+
+ for (uint32_t match_set_idx = 0; match_set_idx < db->nb_cat; ++match_set_idx) {
+ for (uint32_t ft_idx = 0; ft_idx < db->nb_flm_ft; ++ft_idx) {
+ int active = flm_rcp->ft[ft_idx].ref > 0 &&
+ flm_rcp->match_set[match_set_idx].ref > 0;
+ cell = &flm_rcp->cfn_map[match_set_idx * db->nb_flm_ft + ft_idx];
+
+ if (active && cell->cfn_idx == 0) {
+ /* Setup filter */
+ cell->cfn_idx = hw_db_inline_alloc_prioritized_cfn(ndev, db,
+ flm_rcp->match_set[match_set_idx].idx);
+ hw_db_inline_filter_apply(ndev, db, db->cfn[cell->cfn_idx].cfn_hw,
+ flm_rcp->match_set[match_set_idx].idx,
+ flm_rcp->ft[ft_idx].idx,
+ group == 0
+ ? db->match_set[flm_rcp->match_set[match_set_idx]
+ .idx.ids]
+ .data.action_set
+ : flm_rcp->ft[ft_idx].data.action_set);
+ }
+
+ if (!active && cell->cfn_idx > 0) {
+ /* Teardown filter */
+ hw_db_inline_filter_clear(ndev, db, db->cfn[cell->cfn_idx].cfn_hw);
+ hw_db_inline_free_prioritized_cfn(db,
+ db->cfn[cell->cfn_idx].cfn_hw);
+ cell->cfn_idx = 0;
+ }
+ }
+ }
+}
+
+
+/******************************************************************************/
+/* Match set */
+/******************************************************************************/
+
+static int hw_db_inline_match_set_compare(const struct hw_db_inline_match_set_data *data1,
+ const struct hw_db_inline_match_set_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->km_ft.raw == data2->km_ft.raw && data1->jump == data2->jump;
+}
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_match_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_MATCH_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_MATCH_SET_NB; ++i) {
+ if (!found && db->match_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->match_set[i].ref > 0 &&
+ hw_db_inline_match_set_compare(data, &db->match_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_match_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ found = 0;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].ref <= 0) {
+ found = 1;
+ flm_rcp->match_set[i].ref = 1;
+ flm_rcp->match_set[i].idx.raw = idx.raw;
+ break;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->match_set[idx.ids].data, data, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 1;
+ db->match_set[idx.ids].set_priority = ++db->set_priority_counter;
+
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
+ return idx;
+}
+
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->match_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+ int jump;
+
+ if (idx.error)
+ return;
+
+ db->match_set[idx.ids].ref -= 1;
+
+ if (db->match_set[idx.ids].ref > 0)
+ return;
+
+ jump = db->match_set[idx.ids].data.jump;
+ flm_rcp = &db->flm[jump];
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].idx.raw == idx.raw) {
+ flm_rcp->match_set[i].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, jump);
+ memset(&flm_rcp->match_set[i], 0x0,
+ sizeof(struct hw_db_inline_resource_db_flm_match_set));
+ }
+ }
+
+ memset(&db->match_set[idx.ids].data, 0x0, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 0;
+}
+
+/******************************************************************************/
+/* Action set */
+/******************************************************************************/
+
+static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_data *data1,
+ const struct hw_db_inline_action_set_data *data2)
+{
+ if (data1->contains_jump)
+ return data2->contains_jump && data1->jump == data2->jump;
+
+ return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
+ data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
+ data1->hsh.raw == data2->hsh.raw;
+}
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_action_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_ACTION_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_ACTION_SET_NB; ++i) {
+ if (!found && db->action_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->action_set[i].ref > 0 &&
+ hw_db_inline_action_set_compare(data, &db->action_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_action_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->action_set[idx.ids].data, data, sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->action_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->action_set[idx.ids].ref -= 1;
+
+ if (db->action_set[idx.ids].ref <= 0) {
+ memset(&db->action_set[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 0;
+ }
+}
+
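Both the match-set and action-set tables above use the same reference-counted deduplication pattern: `_add()` first scans for a live entry with equal data and bumps its refcount, otherwise it claims the first free slot; `_deref()` clears the slot only when the last reference drops. A minimal self-contained sketch of that pattern (the `slot` type and helper names are illustrative, not driver types):

```c
#include <assert.h>
#include <string.h>

#define NB_SLOTS 4

/* Illustrative reference-counted slot table, modeled on the
 * hw_db_inline_*_add()/_ref()/_deref() pattern. */
struct slot { int ref; int data; };
static struct slot slots[NB_SLOTS];

static int slot_add(int data)
{
	int free_idx = -1;

	for (int i = 0; i < NB_SLOTS; ++i) {
		if (slots[i].ref > 0 && slots[i].data == data) {
			slots[i].ref += 1;	/* dedup hit: reuse entry */
			return i;
		}
		if (free_idx < 0 && slots[i].ref <= 0)
			free_idx = i;		/* remember first free slot */
	}

	if (free_idx < 0)
		return -1;			/* resource exhaustion */

	slots[free_idx].data = data;
	slots[free_idx].ref = 1;
	return free_idx;
}

static void slot_deref(int i)
{
	if (--slots[i].ref <= 0)
		memset(&slots[i], 0, sizeof(slots[i]));
}
```

In the driver, the dedup comparison is the `*_compare()` helper and exhaustion is reported through `idx.error` rather than a negative return, but the lifecycle is the same.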
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1593,6 +2383,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
return idx;
}
@@ -1647,6 +2439,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->group);
+
return idx;
}
@@ -1677,6 +2471,7 @@ void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struc
return;
flm_rcp->ft[idx.id1].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, idx.id2);
memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 9820225ffa..33de674b72 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -131,6 +131,10 @@ struct hw_db_hsh_idx {
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
+
+ HW_DB_IDX_TYPE_MATCH_SET,
+ HW_DB_IDX_TYPE_ACTION_SET,
+
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
@@ -145,6 +149,17 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_HSH,
};
+/* Container types */
+struct hw_db_inline_match_set_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_km_ft km_ft;
+ struct hw_db_action_set_idx action_set;
+ int jump;
+
+ uint8_t priority;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -224,6 +239,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
@@ -262,6 +278,25 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data);
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data);
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+
+/**/
+
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_cot_data *data);
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0b8ac26b83..ac29c59f26 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2678,10 +2678,30 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup Action Set */
+ struct hw_db_inline_action_set_data action_set_data = {
+ .contains_jump = 0,
+ .cot = cot_idx,
+ .qsl = qsl_idx,
+ .slc_lr = slc_lr_idx,
+ .tpe = tpe_idx,
+ .hsh = hsh_idx,
+ };
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
+ local_idxs[(*local_idx_counter)++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 0,
.group = group,
+ .action_set = action_set_idx,
};
struct hw_db_flm_ft flm_ft_idx = empty_pattern
? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
@@ -2868,6 +2888,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
}
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &action_set_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup CAT */
struct hw_db_inline_cat_data cat_data = {
.vlan_mask = (0xf << fd->vlans) & 0xf,
@@ -2987,6 +3019,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
struct hw_db_inline_km_ft_data km_ft_data = {
.cat = cat_idx,
.km = km_idx,
+ .action_set = action_set_idx,
};
struct hw_db_km_ft km_ft_idx =
hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
@@ -3023,10 +3056,32 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup Match Set */
+ struct hw_db_inline_match_set_data match_set_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ .km_ft = km_ft_idx,
+ .action_set = action_set_idx,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .priority = attr->priority & 0xff,
+ };
+ struct hw_db_match_set_idx match_set_idx =
+ hw_db_inline_match_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &match_set_data);
+ fh->db_idxs[fh->db_idx_counter++] = match_set_idx.raw;
+
+ if (match_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Match Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 1,
.jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+		.action_set = action_set_idx,
};
struct hw_db_flm_ft flm_ft_idx =
hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
--
2.45.0
* [PATCH v2 37/73] net/ntnic: add flow dump feature
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (35 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 36/73] net/ntnic: match and action db attributes were added Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 38/73] net/ntnic: add flow flush Serhii Iliushyk
` (36 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Add the possibility to dump a flow in a human-readable format.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 17 ++
.../profile_inline/flow_api_hw_db_inline.c | 264 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 3 +
.../profile_inline/flow_api_profile_inline.c | 81 ++++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 29 ++
drivers/net/ntnic/ntnic_mod_reg.h | 11 +
8 files changed, 413 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index e52363f04e..155a9e1fd6 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -281,6 +281,8 @@ struct flow_handle {
struct flow_handle *next;
struct flow_handle *prev;
+ /* Flow specific pointer to application data stored during action creation. */
+ void *context;
void *user_data;
union {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 043e4244fc..7f1e311988 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1006,6 +1006,22 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
return 0;
}
+static int flow_dev_dump(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_dev_dump_profile_inline(dev, flow, caller_id, file, error);
+}
+
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf)
{
@@ -1031,6 +1047,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_dev_dump = flow_dev_dump,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 52f85b65af..b5fee67e67 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -372,6 +372,270 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ char str_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(str_buffer);
+
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_MATCH_SET: {
+ const struct hw_db_inline_match_set_data *data =
+ &db->match_set[idxs[i].ids].data;
+ fprintf(file, " MATCH_SET %d, priority %d\n", idxs[i].ids,
+ (int)data->priority);
+ fprintf(file, " CAT id %d, KM id %d, KM_FT id %d, ACTION_SET id %d\n",
+ data->cat.ids, data->km.id1, data->km_ft.id1,
+ data->action_set.ids);
+
+ if (data->jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_ACTION_SET: {
+ const struct hw_db_inline_action_set_data *data =
+ &db->action_set[idxs[i].ids].data;
+ fprintf(file, " ACTION_SET %d\n", idxs[i].ids);
+
+ if (data->contains_jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ else
+ fprintf(file,
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ data->cot.ids, data->qsl.ids, data->slc_lr.ids,
+ data->tpe.ids, data->hsh.ids);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_CAT: {
+ const struct hw_db_inline_cat_data *data = &db->cat[idxs[i].ids].data;
+ fprintf(file, " CAT %d\n", idxs[i].ids);
+ fprintf(file, " Port msk 0x%02x, VLAN msk 0x%02x\n",
+ (int)data->mac_port_mask, (int)data->vlan_mask);
+ fprintf(file,
+ " Proto msks: Frag 0x%02x, l2 0x%02x, l3 0x%02x, l4 0x%02x, l3t 0x%02x, l4t 0x%02x\n",
+ (int)data->ptc_mask_frag, (int)data->ptc_mask_l2,
+ (int)data->ptc_mask_l3, (int)data->ptc_mask_l4,
+ (int)data->ptc_mask_l3_tunnel, (int)data->ptc_mask_l4_tunnel);
+ fprintf(file, " IP protocol: pn %u pnt %u\n", data->ip_prot,
+ data->ip_prot_tunnel);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_QSL: {
+ const struct hw_db_inline_qsl_data *data = &db->qsl[idxs[i].ids].data;
+ fprintf(file, " QSL %d\n", idxs[i].ids);
+
+ if (data->discard) {
+ fprintf(file, " Discard\n");
+ break;
+ }
+
+ if (data->drop) {
+ fprintf(file, " Drop\n");
+ break;
+ }
+
+ fprintf(file, " Table size %d\n", data->table_size);
+
+ for (uint32_t i = 0;
+ i < data->table_size && i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ fprintf(file, " %u: Queue %d, TX port %d\n", i,
+ (data->table[i].queue_en ? (int)data->table[i].queue : -1),
+ (data->table[i].tx_port_en ? (int)data->table[i].tx_port
+ : -1));
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_COT: {
+ const struct hw_db_inline_cot_data *data = &db->cot[idxs[i].ids].data;
+ fprintf(file, " COT %d\n", idxs[i].ids);
+ fprintf(file, " Color contrib %d, frag rcp %d\n",
+ (int)data->matcher_color_contrib, (int)data->frag_rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_SLC_LR: {
+ const struct hw_db_inline_slc_lr_data *data =
+ &db->slc_lr[idxs[i].ids].data;
+ fprintf(file, " SLC_LR %d\n", idxs[i].ids);
+ fprintf(file, " Enable %u, dyn %u, ofs %u\n", data->head_slice_en,
+ data->head_slice_dyn, data->head_slice_ofs);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE: {
+ const struct hw_db_inline_tpe_data *data = &db->tpe[idxs[i].ids].data;
+ fprintf(file, " TPE %d\n", idxs[i].ids);
+ fprintf(file, " Insert len %u, new outer %u, calc eth %u\n",
+ data->insert_len, data->new_outer,
+ data->calc_eth_type_from_inner_ip);
+ fprintf(file, " TTL enable %u, dyn %u, ofs %u\n", data->ttl_en,
+ data->ttl_dyn, data->ttl_ofs);
+ fprintf(file,
+ " Len A enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_a_en, data->len_a_pos_dyn, data->len_a_pos_ofs,
+ data->len_a_add_dyn, data->len_a_add_ofs, data->len_a_sub_dyn);
+ fprintf(file,
+ " Len B enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_b_en, data->len_b_pos_dyn, data->len_b_pos_ofs,
+ data->len_b_add_dyn, data->len_b_add_ofs, data->len_b_sub_dyn);
+ fprintf(file,
+ " Len C enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_c_en, data->len_c_pos_dyn, data->len_c_pos_ofs,
+ data->len_c_add_dyn, data->len_c_add_ofs, data->len_c_sub_dyn);
+
+ for (uint32_t i = 0; i < 6; ++i)
+ if (data->writer[i].en)
+ fprintf(file,
+ " Writer %i: Reader %u, dyn %u, ofs %u, len %u\n",
+ i, data->writer[i].reader_select,
+ data->writer[i].dyn, data->writer[i].ofs,
+ data->writer[i].len);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE_EXT: {
+ const struct hw_db_inline_tpe_ext_data *data =
+ &db->tpe_ext[idxs[i].ids].data;
+ const int rpl_rpl_length = ((int)data->size + 15) / 16;
+ fprintf(file, " TPE_EXT %d\n", idxs[i].ids);
+ fprintf(file, " Encap data, size %u\n", data->size);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ fprintf(file, " ");
+
+ for (int n = 15; n >= 0; --n)
+ fprintf(file, " %02x%s", data->hdr8[i * 16 + n],
+ n == 8 ? " " : "");
+
+ fprintf(file, "\n");
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_RCP: {
+ const struct hw_db_inline_flm_rcp_data *data = &db->flm[idxs[i].id1].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " QW0 dyn %u, ofs %u, QW4 dyn %u, ofs %u\n",
+ data->qw0_dyn, data->qw0_ofs, data->qw4_dyn, data->qw4_ofs);
+ fprintf(file, " SW8 dyn %u, ofs %u, SW9 dyn %u, ofs %u\n",
+ data->sw8_dyn, data->sw8_ofs, data->sw9_dyn, data->sw9_ofs);
+ fprintf(file, " Outer prot %u, inner prot %u\n", data->outer_prot,
+ data->inner_prot);
+ fprintf(file, " Mask:\n");
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[0],
+ data->mask[1], data->mask[2], data->mask[3], data->mask[4]);
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[5],
+ data->mask[6], data->mask[7], data->mask[8], data->mask[9]);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_FT: {
+ const struct hw_db_inline_flm_ft_data *data =
+ &db->flm[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " FLM_FT %d\n", idxs[i].id1);
+
+ if (data->is_group_zero)
+ fprintf(file, " Jump to %d\n", data->jump);
+
+ else
+ fprintf(file, " Group %d\n", data->group);
+
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_RCP: {
+ const struct hw_db_inline_km_rcp_data *data = &db->km[idxs[i].id1].data;
+ fprintf(file, " KM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " HW id %u\n", data->rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_FT: {
+ const struct hw_db_inline_km_ft_data *data =
+ &db->km[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " KM_FT %d\n", idxs[i].id1);
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ fprintf(file, " KM_RCP id %d\n", data->km.ids);
+ fprintf(file, " CAT id %d\n", data->cat.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_HSH: {
+ const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
+ fprintf(file, " HSH %d\n", idxs[i].ids);
+
+ switch (data->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ fprintf(file, " Func: NTH10\n");
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ fprintf(file, " Func: Toeplitz\n");
+ fprintf(file, " Key:");
+
+ for (uint8_t i = 0; i < MAX_RSS_KEY_LEN; i++) {
+ if (i % 10 == 0)
+ fprintf(file, "\n ");
+
+ fprintf(file, " %02x", data->key[i]);
+ }
+
+ fprintf(file, "\n");
+ break;
+
+ default:
+ fprintf(file, " Func: %u\n", data->func);
+ }
+
+ fprintf(file, " Hash mask hex:\n");
+ fprintf(file, " %016lx\n", data->hash_mask);
+
+ /* convert hash mask to human readable RTE_ETH_RSS_* form if possible */
+ if (sprint_nt_rss_mask(str_buffer, rss_buffer_len, "\n ",
+ data->hash_mask) == 0) {
+ fprintf(file, " Hash mask flags:%s\n", str_buffer);
+ }
+
+ break;
+ }
+
+ default: {
+ fprintf(file, " Unknown item. Type %u\n", idxs[i].type);
+ break;
+ }
+ }
+ }
+}
+
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ fprintf(file, "CFN status:\n");
+
+ for (uint32_t id = 0; id < db->nb_cat; ++id)
+ if (db->cfn[id].cfn_hw)
+ fprintf(file, " ID %d, HW id %d, priority 0x%" PRIx64 "\n", (int)id,
+ db->cfn[id].cfn_hw, db->cfn[id].priority);
+}
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 33de674b72..a9d31c86ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -276,6 +276,9 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
uint32_t size);
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file);
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file);
/**/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index ac29c59f26..e47ef37c6b 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4301,6 +4301,86 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
return res;
}
+static void dump_flm_data(const uint32_t *data, FILE *file)
+{
+ for (unsigned int i = 0; i < 10; ++i) {
+ fprintf(file, "%s%02X %02X %02X %02X%s", i % 2 ? "" : " ",
+ (data[i] >> 24) & 0xff, (data[i] >> 16) & 0xff, (data[i] >> 8) & 0xff,
+ data[i] & 0xff, i % 2 ? "\n" : " ");
+ }
+}
+
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ if (flow != NULL) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLM) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+
+ } else {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs, flow->db_idx_counter,
+ file);
+ }
+
+ } else {
+ int max_flm_count = 1000;
+
+ hw_db_inline_dump_cfn(dev->ndev, dev->ndev->hw_db_handle, file);
+
+ flow = dev->ndev->flow_base;
+
+ while (flow) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs,
+ flow->db_idx_counter, file);
+ }
+
+ flow = flow->next;
+ }
+
+ flow = dev->ndev->flow_base_flm;
+
+ while (flow && max_flm_count >= 0) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+ max_flm_count -= 1;
+ }
+
+ flow = flow->next;
+ }
+ }
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
static const struct profile_inline_ops ops = {
/*
@@ -4309,6 +4389,7 @@ static const struct profile_inline_ops ops = {
.done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
.initialize_flow_management_of_ndev_profile_inline =
initialize_flow_management_of_ndev_profile_inline,
+ .flow_dev_dump_profile_inline = flow_dev_dump_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e623bb2352..2c76a2c023 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index df391b6399..5505198148 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -569,9 +569,38 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: flow_filter module uninitialized", __func__);
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_dev_dump(internals->flw_dev,
+ is_flow_handle_typecast(flow) ? (void *)flow
+ : flow->flw_hdl,
+ caller_id, file, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .dev_dump = eth_flow_dev_dump,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 27d6cbef01..cef655c5e0 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,12 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -284,6 +290,11 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ int (*flow_dev_dump)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
/*
* NT Flow API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 38/73] net/ntnic: add flow flush
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (36 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 37/73] net/ntnic: add flow dump feature Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 39/73] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
` (35 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Implement the flow flush API.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
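Note for reviewers: the flush path walks the FLM list first and then the normal flow list, destroying every node owned by the caller while iterating. The core idiom — saving `next` before the current node is destroyed, so destruction cannot invalidate the iterator — can be sketched in isolation. The struct and helper names below are illustrative stand-ins, not the driver's types:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for the driver's flow handle list node. */
struct flow {
	int caller_id;
	struct flow *next;
};

/* Prepend a node owned by caller_id (test helper). */
static struct flow *push(struct flow *head, int caller_id)
{
	struct flow *f = malloc(sizeof(*f));

	f->caller_id = caller_id;
	f->next = head;
	return f;
}

static int count(const struct flow *f)
{
	int n = 0;

	for (; f; f = f->next)
		n++;

	return n;
}

/* Destroy every flow owned by caller_id. Mirrors the traversal in
 * flow_flush_profile_inline(): next is saved before the node is
 * freed, so the walk survives the destruction of the current node.
 * (Here the unlink is explicit; in the driver, destroy does it.) */
static int flush_caller(struct flow **base, int caller_id)
{
	struct flow **prev = base;
	struct flow *flow = *base;

	while (flow) {
		struct flow *next = flow->next;

		if (flow->caller_id == caller_id) {
			*prev = next;	/* unlink before freeing */
			free(flow);
		} else {
			prev = &flow->next;
		}

		flow = next;
	}

	return 0;
}
```

The same two-pass ordering as the patch applies on top of this: FLM flows are flushed first because normal flows are their parents.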
drivers/net/ntnic/nthw/flow_api/flow_api.c | 13 ++++++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 4 ++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 38 ++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +++
5 files changed, 105 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 7f1e311988..34f2cad2cd 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -253,6 +253,18 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
+static int flow_flush(struct flow_eth_dev *dev, uint16_t caller_id, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_flush_profile_inline(dev, caller_id, error);
+}
+
/*
* Device Management API
*/
@@ -1047,6 +1059,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index e47ef37c6b..1dfd96eaac 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3636,6 +3636,48 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ /*
+ * Delete all created FLM flows from this eth device.
+ * FLM flows must be deleted first because normal flows are their parents.
+ */
+ struct flow_handle *flow = dev->ndev->flow_base_flm;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ /* Delete all created flows from this eth device */
+ flow = dev->ndev->flow_base;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ return err;
+}
+
static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
{
return (hash_mask & hash_bits) == hash_bits;
@@ -4396,6 +4438,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
/*
* NT Flow FLM Meter API
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 2c76a2c023..c695842077 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,10 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 5505198148..87b26bd315 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -569,6 +569,43 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ int res = 0;
+ /* Main application caller_id is port_id shifted above VDPA ports */
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (internals->flw_dev) {
+ res = flow_filter_ops->flow_flush(internals->flw_dev, caller_id, &flow_error);
+ rte_spinlock_lock(&flow_lock);
+
+ for (int flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used && nt_flows[flow].caller_id == caller_id) {
+ /* Cleanup recorded flows */
+ nt_flows[flow].used = 0;
+ nt_flows[flow].caller_id = 0;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
+ }
+
+ convert_error(error, &flow_error);
+
+ return res;
+}
+
static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
struct rte_flow *flow,
FILE *file,
@@ -600,6 +637,7 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index cef655c5e0..12baa13800 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,10 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_flush_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -309,6 +313,9 @@ struct flow_filter_ops {
int (*flow_destroy)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v2 39/73] net/ntnic: add GMF (Generic MAC Feeder) module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (37 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 38/73] net/ntnic: add flow flush Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 40/73] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
` (34 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The Generic MAC Feeder (GMF) module provides a way to feed data
to the MAC modules directly from the FPGA,
rather than from the host or physical ports.
Its intended use case is as a test tool; it is not used by NTNIC.
The module is nevertheless required for correct initialization.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
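Note on the double `nthw_gmf_init()` call in `_port_init()`: the first call with a NULL context only probes whether the optional GMF module exists in this FPGA image (0 if present, -1 if absent) without initializing anything; only on success is a real context initialized and enabled. The contract can be sketched with mock types — the names below are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative mock of an FPGA image; the real lookup is done via
 * nthw_fpga_query_module(). */
struct fpga {
	bool has_gmf;
};

struct gmf {
	const struct fpga *fpga;
	bool enabled;
};

/* Mirrors the nthw_gmf_init() contract: with p == NULL the call
 * only probes for the module; with a real pointer it performs the
 * full initialization (and still fails if the module is absent). */
static int gmf_init(struct gmf *p, const struct fpga *fpga)
{
	bool present = fpga->has_gmf;

	if (p == NULL)
		return present ? 0 : -1;

	if (!present)
		return -1;

	p->fpga = fpga;
	p->enabled = false;
	return 0;
}
```

This keeps the optional module's absence a silent no-op on FPGA images without GMF, while images with it get the module enabled.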
---
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 ++
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +++++++++
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 ++++++++++++++++++
5 files changed, 207 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
diff --git a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
index 8964458b47..d8e0cad7cd 100644
--- a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
+++ b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
@@ -404,6 +404,14 @@ static int _port_init(adapter_info_t *drv, nthw_fpga_t *fpga, int port)
_enable_tx(drv, mac_pcs);
_reset_rx(drv, mac_pcs);
+ /* 2.2) Nt4gaPort::setup() */
+ if (nthw_gmf_init(NULL, fpga, port) == 0) {
+ nthw_gmf_t gmf;
+
+ if (nthw_gmf_init(&gmf, fpga, port) == 0)
+ nthw_gmf_set_enable(&gmf, true);
+ }
+
/* Phase 3. Link state machine steps */
/* 3.1) Create NIM, ::createNim() */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d7e6d05556..92167d24e4 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -38,6 +38,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst9563.c',
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
+ 'nthw/core/nthw_gmf.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_core.h b/drivers/net/ntnic/nthw/core/include/nthw_core.h
index fe32891712..4073f9632c 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_core.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_core.h
@@ -17,6 +17,7 @@
#include "nthw_iic.h"
#include "nthw_i2cm.h"
+#include "nthw_gmf.h"
#include "nthw_gpio_phy.h"
#include "nthw_mac_pcs.h"
#include "nthw_sdc.h"
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_gmf.h b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
new file mode 100644
index 0000000000..cc5be85154
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
@@ -0,0 +1,64 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_GMF_H__
+#define __NTHW_GMF_H__
+
+struct nthw_gmf {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_gmf;
+ int mn_instance;
+
+ nthw_register_t *mp_ctrl;
+ nthw_field_t *mp_ctrl_enable;
+ nthw_field_t *mp_ctrl_ifg_enable;
+ nthw_field_t *mp_ctrl_ifg_tx_now_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock;
+ nthw_field_t *mp_ctrl_ifg_auto_adjust_enable;
+ nthw_field_t *mp_ctrl_ts_inject_always;
+ nthw_field_t *mp_ctrl_fcs_always;
+
+ nthw_register_t *mp_speed;
+ nthw_field_t *mp_speed_ifg_speed;
+
+ nthw_register_t *mp_ifg_clock_delta;
+ nthw_field_t *mp_ifg_clock_delta_delta;
+
+ nthw_register_t *mp_ifg_clock_delta_adjust;
+ nthw_field_t *mp_ifg_clock_delta_adjust_delta;
+
+ nthw_register_t *mp_ifg_max_adjust_slack;
+ nthw_field_t *mp_ifg_max_adjust_slack_slack;
+
+ nthw_register_t *mp_debug_lane_marker;
+ nthw_field_t *mp_debug_lane_marker_compensation;
+
+ nthw_register_t *mp_stat_sticky;
+ nthw_field_t *mp_stat_sticky_data_underflowed;
+ nthw_field_t *mp_stat_sticky_ifg_adjusted;
+
+ nthw_register_t *mp_stat_next_pkt;
+ nthw_field_t *mp_stat_next_pkt_ns;
+
+ nthw_register_t *mp_stat_max_delayed_pkt;
+ nthw_field_t *mp_stat_max_delayed_pkt_ns;
+
+ nthw_register_t *mp_ts_inject;
+ nthw_field_t *mp_ts_inject_offset;
+ nthw_field_t *mp_ts_inject_pos;
+ int mn_param_gmf_ifg_speed_mul;
+ int mn_param_gmf_ifg_speed_div;
+
+ bool m_administrative_block; /* Used to enforce license expiry */
+};
+
+typedef struct nthw_gmf nthw_gmf_t;
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable);
+
+#endif /* __NTHW_GMF_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_gmf.c b/drivers/net/ntnic/nthw/core/nthw_gmf.c
new file mode 100644
index 0000000000..16a4c288bd
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_gmf.c
@@ -0,0 +1,133 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <limits.h>
+#include <math.h>
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_gmf.h"
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_GMF, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: GMF %d: no such instance",
+ p_fpga->p_fpga_info->mp_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_gmf = mod;
+
+ p->mp_ctrl = nthw_module_get_register(p->mp_mod_gmf, GMF_CTRL);
+ p->mp_ctrl_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_ENABLE);
+ p->mp_ctrl_ifg_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_ENABLE);
+ p->mp_ctrl_ifg_auto_adjust_enable =
+ nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_AUTO_ADJUST_ENABLE);
+ p->mp_ctrl_ts_inject_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_TS_INJECT_ALWAYS);
+ p->mp_ctrl_fcs_always = nthw_register_query_field(p->mp_ctrl, GMF_CTRL_FCS_ALWAYS);
+
+ p->mp_speed = nthw_module_get_register(p->mp_mod_gmf, GMF_SPEED);
+ p->mp_speed_ifg_speed = nthw_register_get_field(p->mp_speed, GMF_SPEED_IFG_SPEED);
+
+ p->mp_ifg_clock_delta = nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA);
+ p->mp_ifg_clock_delta_delta =
+ nthw_register_get_field(p->mp_ifg_clock_delta, GMF_IFG_SET_CLOCK_DELTA_DELTA);
+
+ p->mp_ifg_max_adjust_slack =
+ nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_MAX_ADJUST_SLACK);
+ p->mp_ifg_max_adjust_slack_slack = nthw_register_get_field(p->mp_ifg_max_adjust_slack,
+ GMF_IFG_MAX_ADJUST_SLACK_SLACK);
+
+ p->mp_debug_lane_marker = nthw_module_get_register(p->mp_mod_gmf, GMF_DEBUG_LANE_MARKER);
+ p->mp_debug_lane_marker_compensation =
+ nthw_register_get_field(p->mp_debug_lane_marker,
+ GMF_DEBUG_LANE_MARKER_COMPENSATION);
+
+ p->mp_stat_sticky = nthw_module_get_register(p->mp_mod_gmf, GMF_STAT_STICKY);
+ p->mp_stat_sticky_data_underflowed =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_DATA_UNDERFLOWED);
+ p->mp_stat_sticky_ifg_adjusted =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_IFG_ADJUSTED);
+
+ p->mn_param_gmf_ifg_speed_mul =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_MUL, 1);
+ p->mn_param_gmf_ifg_speed_div =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_DIV, 1);
+
+ p->m_administrative_block = false;
+
+ p->mp_stat_next_pkt = nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_NEXT_PKT);
+
+ if (p->mp_stat_next_pkt) {
+ p->mp_stat_next_pkt_ns =
+ nthw_register_query_field(p->mp_stat_next_pkt, GMF_STAT_NEXT_PKT_NS);
+
+ } else {
+ p->mp_stat_next_pkt_ns = NULL;
+ }
+
+ p->mp_stat_max_delayed_pkt =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_MAX_DELAYED_PKT);
+
+ if (p->mp_stat_max_delayed_pkt) {
+ p->mp_stat_max_delayed_pkt_ns =
+ nthw_register_query_field(p->mp_stat_max_delayed_pkt,
+ GMF_STAT_MAX_DELAYED_PKT_NS);
+
+ } else {
+ p->mp_stat_max_delayed_pkt_ns = NULL;
+ }
+
+ p->mp_ctrl_ifg_tx_now_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_NOW_ALWAYS);
+ p->mp_ctrl_ifg_tx_on_ts_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ALWAYS);
+
+ p->mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ADJUST_ON_SET_CLOCK);
+
+ p->mp_ifg_clock_delta_adjust =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA_ADJUST);
+
+ if (p->mp_ifg_clock_delta_adjust) {
+ p->mp_ifg_clock_delta_adjust_delta =
+ nthw_register_query_field(p->mp_ifg_clock_delta_adjust,
+ GMF_IFG_SET_CLOCK_DELTA_ADJUST_DELTA);
+
+ } else {
+ p->mp_ifg_clock_delta_adjust_delta = NULL;
+ }
+
+ p->mp_ts_inject = nthw_module_query_register(p->mp_mod_gmf, GMF_TS_INJECT);
+
+ if (p->mp_ts_inject) {
+ p->mp_ts_inject_offset =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_OFFSET);
+ p->mp_ts_inject_pos =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_POS);
+
+ } else {
+ p->mp_ts_inject_offset = NULL;
+ p->mp_ts_inject_pos = NULL;
+ }
+
+ return 0;
+}
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable)
+{
+ if (!p->m_administrative_block)
+ nthw_field_set_val_flush32(p->mp_ctrl_enable, enable ? 1 : 0);
+}
--
2.45.0
* [PATCH v2 40/73] net/ntnic: sort FPGA registers alphanumerically
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (38 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 39/73] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 41/73] net/ntnic: add MOD CSU Serhii Iliushyk
` (33 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Beautification commit. It is required to cleanly support different FPGA variants.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
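The reordering itself is mechanical; the same alphanumeric ordering of a register table can be produced programmatically with a strcmp-based qsort over the symbolic names. The entry type below is an illustrative stand-in, not the generated `nthw_fpga_register_init_s`:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for a register init entry keyed by name. */
struct reg_init {
	const char *name;
	int index;
};

static int cmp_reg_name(const void *a, const void *b)
{
	const struct reg_init *ra = a;
	const struct reg_init *rb = b;

	return strcmp(ra->name, rb->name);
}

/* Sort a register table alphanumerically by symbolic name. */
static void sort_regs(struct reg_init *regs, size_t n)
{
	qsort(regs, n, sizeof(*regs), cmp_reg_name);
}
```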
.../supported/nthw_fpga_9563_055_049_0000.c | 364 +++++++++---------
1 file changed, 182 insertions(+), 182 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 6df7208649..e076697a92 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,187 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
+ { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
+ { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
+ { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
+ { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
+ { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
+ { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
+ { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
+ { DBS_RX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
+ { DBS_RX_INIT_BUSY, 1, 8, 0 },
+ { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
+ { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
+ { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
+ { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
+ { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
+ { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
+ { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
+ { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
+ { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
+ { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
+ { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
+ { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
+ { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
+ { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
+ { DBS_TX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
+ { DBS_TX_INIT_BUSY, 1, 8, 0 },
+ { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
+ { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
+ { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
+ { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
+ { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
+ { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
+ { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
+ { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
+ { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
+ { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
+ { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
+ { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
+ { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
+ { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
+ { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_register_init_s dbs_registers[] = {
+ { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
+ { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
+ { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
+ { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
+ { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
+ { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
+ { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
+ { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
+ { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
+ { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
+ { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
+ { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
+ { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
+ { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
+ { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
+ { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
+ { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
+ { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
+ { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
+ { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
+ { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
+ { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
+ { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
+ { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
+ { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
+ { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
+ { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1541,192 +1722,11 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
-static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
- { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
- { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
- { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
- { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
- { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
- { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
- { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
- { DBS_RX_IDLE_BUSY, 1, 8, 0 },
- { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
- { DBS_RX_INIT_BUSY, 1, 8, 0 },
- { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
- { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
- { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
- { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
- { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
- { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
- { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
- { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
- { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
- { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
- { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
- { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
- { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
- { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
- { DBS_TX_IDLE_BUSY, 1, 8, 0 },
- { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
- { DBS_TX_INIT_BUSY, 1, 8, 0 },
- { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
- { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
- { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
- { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
- { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
- { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
- { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
- { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
- { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
- { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
- { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
- { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
- { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
- { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
- { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_register_init_s dbs_registers[] = {
- { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
- { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
- { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
- { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
- { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
- { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
- { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
- { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
- { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
- { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
- { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
- { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
- { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
- { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
- { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
- { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
- { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
- { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
- { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
- { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
- { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
- { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
- { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
- { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
- { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
- { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
- { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
-};
-
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
- { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers},
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
{
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 41/73] net/ntnic: add MOD CSU
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (39 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 40/73] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:54 ` [PATCH v2 42/73] net/ntnic: add MOD FLM Serhii Iliushyk
` (32 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Checksum Update module updates the checksums of packets
that have been modified in any way.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index e076697a92..efa7b306bc 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,23 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
+ { CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s csu_rcp_data_fields[] = {
+ { CSU_RCP_DATA_IL3_CMD, 2, 5, 0x0000 },
+ { CSU_RCP_DATA_IL4_CMD, 3, 7, 0x0000 },
+ { CSU_RCP_DATA_OL3_CMD, 2, 0, 0x0000 },
+ { CSU_RCP_DATA_OL4_CMD, 3, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s csu_registers[] = {
+ { CSU_RCP_CTRL, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, csu_rcp_ctrl_fields },
+ { CSU_RCP_DATA, 2, 10, NTHW_FPGA_REG_TYPE_WO, 0, 4, csu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
{ DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
{ DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
@@ -1724,6 +1741,7 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
@@ -1919,5 +1937,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 22, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 42/73] net/ntnic: add MOD FLM
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (40 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 41/73] net/ntnic: add MOD CSU Serhii Iliushyk
@ 2024-10-22 16:54 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 43/73] net/ntnic: add HFU module Serhii Iliushyk
` (31 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:54 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup and
programming engine which supports exact-match lookup at line rate
of up to hundreds of millions of flows.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 286 +++++++++++++++++-
1 file changed, 284 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efa7b306bc..739cabfb1c 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -468,6 +468,288 @@ static nthw_fpga_register_init_s dbs_registers[] = {
{ DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
};
+static nthw_fpga_field_init_s flm_buf_ctrl_fields[] = {
+ { FLM_BUF_CTRL_INF_AVAIL, 16, 16, 0x0000 },
+ { FLM_BUF_CTRL_LRN_FREE, 16, 0, 0x0000 },
+ { FLM_BUF_CTRL_STA_AVAIL, 16, 32, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_control_fields[] = {
+ { FLM_CONTROL_CALIB_RECALIBRATE, 3, 28, 0 },
+ { FLM_CONTROL_CRCRD, 1, 12, 0x0000 },
+ { FLM_CONTROL_CRCWR, 1, 11, 0x0000 },
+ { FLM_CONTROL_EAB, 5, 18, 0 },
+ { FLM_CONTROL_ENABLE, 1, 0, 0 },
+ { FLM_CONTROL_INIT, 1, 1, 0x0000 },
+ { FLM_CONTROL_LDS, 1, 2, 0x0000 },
+ { FLM_CONTROL_LFS, 1, 3, 0x0000 },
+ { FLM_CONTROL_LIS, 1, 4, 0x0000 },
+ { FLM_CONTROL_PDS, 1, 9, 0x0000 },
+ { FLM_CONTROL_PIS, 1, 10, 0x0000 },
+ { FLM_CONTROL_RBL, 4, 13, 0 },
+ { FLM_CONTROL_RDS, 1, 7, 0x0000 },
+ { FLM_CONTROL_RIS, 1, 8, 0x0000 },
+ { FLM_CONTROL_SPLIT_SDRAM_USAGE, 5, 23, 16 },
+ { FLM_CONTROL_UDS, 1, 5, 0x0000 },
+ { FLM_CONTROL_UIS, 1, 6, 0x0000 },
+ { FLM_CONTROL_WPD, 1, 17, 0 },
+};
+
+static nthw_fpga_field_init_s flm_inf_data_fields[] = {
+ { FLM_INF_DATA_BYTES, 64, 0, 0x0000 }, { FLM_INF_DATA_CAUSE, 3, 224, 0x0000 },
+ { FLM_INF_DATA_EOR, 1, 287, 0x0000 }, { FLM_INF_DATA_ID, 32, 192, 0x0000 },
+ { FLM_INF_DATA_PACKETS, 64, 64, 0x0000 }, { FLM_INF_DATA_TS, 64, 128, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_aps_fields[] = {
+ { FLM_LOAD_APS_APS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_bin_fields[] = {
+ { FLM_LOAD_BIN_BIN, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_lps_fields[] = {
+ { FLM_LOAD_LPS_LPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
+ { FLM_LRN_DATA_ADJ, 32, 480, 0x0000 }, { FLM_LRN_DATA_COLOR, 32, 448, 0x0000 },
+ { FLM_LRN_DATA_DSCP, 6, 698, 0x0000 }, { FLM_LRN_DATA_ENT, 1, 693, 0x0000 },
+ { FLM_LRN_DATA_EOR, 1, 767, 0x0000 }, { FLM_LRN_DATA_FILL, 16, 544, 0x0000 },
+ { FLM_LRN_DATA_FT, 4, 560, 0x0000 }, { FLM_LRN_DATA_FT_MBR, 4, 564, 0x0000 },
+ { FLM_LRN_DATA_FT_MISS, 4, 568, 0x0000 }, { FLM_LRN_DATA_ID, 32, 512, 0x0000 },
+ { FLM_LRN_DATA_KID, 8, 328, 0x0000 }, { FLM_LRN_DATA_MBR_ID1, 28, 572, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID2, 28, 600, 0x0000 }, { FLM_LRN_DATA_MBR_ID3, 28, 628, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID4, 28, 656, 0x0000 }, { FLM_LRN_DATA_NAT_EN, 1, 711, 0x0000 },
+ { FLM_LRN_DATA_NAT_IP, 32, 336, 0x0000 }, { FLM_LRN_DATA_NAT_PORT, 16, 400, 0x0000 },
+ { FLM_LRN_DATA_NOFI, 1, 716, 0x0000 }, { FLM_LRN_DATA_OP, 4, 694, 0x0000 },
+ { FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
+ { FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
+ { FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
+ { FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
+ { FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_prio_fields[] = {
+ { FLM_PRIO_FT0, 4, 4, 1 }, { FLM_PRIO_FT1, 4, 12, 1 }, { FLM_PRIO_FT2, 4, 20, 1 },
+ { FLM_PRIO_FT3, 4, 28, 1 }, { FLM_PRIO_LIMIT0, 4, 0, 0 }, { FLM_PRIO_LIMIT1, 4, 8, 0 },
+ { FLM_PRIO_LIMIT2, 4, 16, 0 }, { FLM_PRIO_LIMIT3, 4, 24, 0 },
+};
+
+static nthw_fpga_field_init_s flm_pst_ctrl_fields[] = {
+ { FLM_PST_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_PST_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_pst_data_fields[] = {
+ { FLM_PST_DATA_BP, 5, 0, 0x0000 },
+ { FLM_PST_DATA_PP, 5, 5, 0x0000 },
+ { FLM_PST_DATA_TP, 5, 10, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_ctrl_fields[] = {
+ { FLM_RCP_CTRL_ADR, 5, 0, 0x0000 },
+ { FLM_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_data_fields[] = {
+ { FLM_RCP_DATA_AUTO_IPV4_MASK, 1, 402, 0x0000 },
+ { FLM_RCP_DATA_BYT_DYN, 5, 387, 0x0000 },
+ { FLM_RCP_DATA_BYT_OFS, 8, 392, 0x0000 },
+ { FLM_RCP_DATA_IPN, 1, 386, 0x0000 },
+ { FLM_RCP_DATA_KID, 8, 377, 0x0000 },
+ { FLM_RCP_DATA_LOOKUP, 1, 0, 0x0000 },
+ { FLM_RCP_DATA_MASK, 320, 57, 0x0000 },
+ { FLM_RCP_DATA_OPN, 1, 385, 0x0000 },
+ { FLM_RCP_DATA_QW0_DYN, 5, 1, 0x0000 },
+ { FLM_RCP_DATA_QW0_OFS, 8, 6, 0x0000 },
+ { FLM_RCP_DATA_QW0_SEL, 2, 14, 0x0000 },
+ { FLM_RCP_DATA_QW4_DYN, 5, 16, 0x0000 },
+ { FLM_RCP_DATA_QW4_OFS, 8, 21, 0x0000 },
+ { FLM_RCP_DATA_SW8_DYN, 5, 29, 0x0000 },
+ { FLM_RCP_DATA_SW8_OFS, 8, 34, 0x0000 },
+ { FLM_RCP_DATA_SW8_SEL, 2, 42, 0x0000 },
+ { FLM_RCP_DATA_SW9_DYN, 5, 44, 0x0000 },
+ { FLM_RCP_DATA_SW9_OFS, 8, 49, 0x0000 },
+ { FLM_RCP_DATA_TXPLM, 2, 400, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scan_fields[] = {
+ { FLM_SCAN_I, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s flm_status_fields[] = {
+ { FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
+ { FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
+ { FLM_STATUS_CALIB_SUCCESS, 3, 0, 0 },
+ { FLM_STATUS_CRCERR, 1, 10, 0x0000 },
+ { FLM_STATUS_CRITICAL, 1, 8, 0x0000 },
+ { FLM_STATUS_EFT_BP, 1, 11, 0x0000 },
+ { FLM_STATUS_IDLE, 1, 7, 0x0000 },
+ { FLM_STATUS_INITDONE, 1, 6, 0x0000 },
+ { FLM_STATUS_PANIC, 1, 9, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_done_fields[] = {
+ { FLM_STAT_AUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_fail_fields[] = {
+ { FLM_STAT_AUL_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_ignore_fields[] = {
+ { FLM_STAT_AUL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_hit_fields[] = {
+ { FLM_STAT_CSH_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_miss_fields[] = {
+ { FLM_STAT_CSH_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_unh_fields[] = {
+ { FLM_STAT_CSH_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_move_fields[] = {
+ { FLM_STAT_CUC_MOVE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_start_fields[] = {
+ { FLM_STAT_CUC_START_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_flows_fields[] = {
+ { FLM_STAT_FLOWS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_done_fields[] = {
+ { FLM_STAT_INF_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_skip_fields[] = {
+ { FLM_STAT_INF_SKIP_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_done_fields[] = {
+ { FLM_STAT_LRN_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_fail_fields[] = {
+ { FLM_STAT_LRN_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_ignore_fields[] = {
+ { FLM_STAT_LRN_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_dis_fields[] = {
+ { FLM_STAT_PCK_DIS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_hit_fields[] = {
+ { FLM_STAT_PCK_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_miss_fields[] = {
+ { FLM_STAT_PCK_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_unh_fields[] = {
+ { FLM_STAT_PCK_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_done_fields[] = {
+ { FLM_STAT_PRB_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_ignore_fields[] = {
+ { FLM_STAT_PRB_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_done_fields[] = {
+ { FLM_STAT_REL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_ignore_fields[] = {
+ { FLM_STAT_REL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_sta_done_fields[] = {
+ { FLM_STAT_STA_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_tul_done_fields[] = {
+ { FLM_STAT_TUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_done_fields[] = {
+ { FLM_STAT_UNL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_ignore_fields[] = {
+ { FLM_STAT_UNL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_sta_data_fields[] = {
+ { FLM_STA_DATA_EOR, 1, 95, 0x0000 }, { FLM_STA_DATA_ID, 32, 0, 0x0000 },
+ { FLM_STA_DATA_LDS, 1, 32, 0x0000 }, { FLM_STA_DATA_LFS, 1, 33, 0x0000 },
+ { FLM_STA_DATA_LIS, 1, 34, 0x0000 }, { FLM_STA_DATA_PDS, 1, 39, 0x0000 },
+ { FLM_STA_DATA_PIS, 1, 40, 0x0000 }, { FLM_STA_DATA_RDS, 1, 37, 0x0000 },
+ { FLM_STA_DATA_RIS, 1, 38, 0x0000 }, { FLM_STA_DATA_UDS, 1, 35, 0x0000 },
+ { FLM_STA_DATA_UIS, 1, 36, 0x0000 },
+};
+
+static nthw_fpga_register_init_s flm_registers[] = {
+ { FLM_BUF_CTRL, 14, 48, NTHW_FPGA_REG_TYPE_RW, 0, 3, flm_buf_ctrl_fields },
+ { FLM_CONTROL, 0, 31, NTHW_FPGA_REG_TYPE_MIXED, 134217728, 18, flm_control_fields },
+ { FLM_INF_DATA, 16, 288, NTHW_FPGA_REG_TYPE_RO, 0, 6, flm_inf_data_fields },
+ { FLM_LOAD_APS, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_aps_fields },
+ { FLM_LOAD_BIN, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_load_bin_fields },
+ { FLM_LOAD_LPS, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_lps_fields },
+ { FLM_LRN_DATA, 15, 768, NTHW_FPGA_REG_TYPE_WO, 0, 34, flm_lrn_data_fields },
+ { FLM_PRIO, 6, 32, NTHW_FPGA_REG_TYPE_WO, 269488144, 8, flm_prio_fields },
+ { FLM_PST_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_pst_ctrl_fields },
+ { FLM_PST_DATA, 13, 15, NTHW_FPGA_REG_TYPE_WO, 0, 3, flm_pst_data_fields },
+ { FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
+ { FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
+ { FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
+ { FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
+ { FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
+ { FLM_STAT_AUL_IGNORE, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_ignore_fields },
+ { FLM_STAT_CSH_HIT, 52, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_hit_fields },
+ { FLM_STAT_CSH_MISS, 53, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_miss_fields },
+ { FLM_STAT_CSH_UNH, 54, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_unh_fields },
+ { FLM_STAT_CUC_MOVE, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_move_fields },
+ { FLM_STAT_CUC_START, 55, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_start_fields },
+ { FLM_STAT_FLOWS, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_flows_fields },
+ { FLM_STAT_INF_DONE, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_done_fields },
+ { FLM_STAT_INF_SKIP, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_skip_fields },
+ { FLM_STAT_LRN_DONE, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_done_fields },
+ { FLM_STAT_LRN_FAIL, 34, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_fail_fields },
+ { FLM_STAT_LRN_IGNORE, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_ignore_fields },
+ { FLM_STAT_PCK_DIS, 51, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_dis_fields },
+ { FLM_STAT_PCK_HIT, 48, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_hit_fields },
+ { FLM_STAT_PCK_MISS, 49, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_miss_fields },
+ { FLM_STAT_PCK_UNH, 50, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_unh_fields },
+ { FLM_STAT_PRB_DONE, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_done_fields },
+ { FLM_STAT_PRB_IGNORE, 40, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_ignore_fields },
+ { FLM_STAT_REL_DONE, 37, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_done_fields },
+ { FLM_STAT_REL_IGNORE, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_ignore_fields },
+ { FLM_STAT_STA_DONE, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_sta_done_fields },
+ { FLM_STAT_TUL_DONE, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_tul_done_fields },
+ { FLM_STAT_UNL_DONE, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_done_fields },
+ { FLM_STAT_UNL_IGNORE, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_ignore_fields },
+ { FLM_STA_DATA, 17, 96, NTHW_FPGA_REG_TYPE_RO, 0, 11, flm_sta_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1743,6 +2025,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
+ { MOD_FLM, 0, MOD_FLM, 0, 25, NTHW_FPGA_BUS_TYPE_RAB1, 1280, 43, flm_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
@@ -1817,7 +2100,6 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
- { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
@@ -1937,5 +2219,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 43/73] net/ntnic: add HFU module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (41 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 42/73] net/ntnic: add MOD FLM Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 44/73] net/ntnic: add IFR module Serhii Iliushyk
` (30 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Header Field Update module updates protocol fields,
for example length fields and next-protocol fields,
when a packet has been modified.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 739cabfb1c..82068746b3 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -919,6 +919,41 @@ static nthw_fpga_register_init_s gpio_phy_registers[] = {
{ GPIO_PHY_GPIO, 1, 10, NTHW_FPGA_REG_TYPE_RW, 17, 10, gpio_phy_gpio_fields },
};
+static nthw_fpga_field_init_s hfu_rcp_ctrl_fields[] = {
+ { HFU_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { HFU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s hfu_rcp_data_fields[] = {
+ { HFU_RCP_DATA_LEN_A_ADD_DYN, 5, 15, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_ADD_OFS, 8, 20, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_OL4LEN, 1, 1, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_DYN, 5, 2, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_OFS, 8, 7, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_SUB_DYN, 5, 28, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_WR, 1, 0, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_DYN, 5, 47, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_OFS, 8, 52, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_DYN, 5, 34, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_OFS, 8, 39, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_SUB_DYN, 5, 60, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_WR, 1, 33, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_DYN, 5, 79, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_OFS, 8, 84, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_DYN, 5, 66, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_OFS, 8, 71, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_SUB_DYN, 5, 92, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_WR, 1, 65, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_DYN, 5, 98, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_OFS, 8, 103, 0x0000 },
+ { HFU_RCP_DATA_TTL_WR, 1, 97, 0x0000 },
+};
+
+static nthw_fpga_register_init_s hfu_registers[] = {
+ { HFU_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, hfu_rcp_ctrl_fields },
+ { HFU_RCP_DATA, 1, 111, NTHW_FPGA_REG_TYPE_WO, 0, 22, hfu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s hif_build_time_fields[] = {
{ HIF_BUILD_TIME_TIME, 32, 0, 1726740521 },
};
@@ -2033,6 +2068,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
gpio_phy_registers
},
+ { MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
@@ -2219,5 +2255,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 44/73] net/ntnic: add IFR module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (42 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 43/73] net/ntnic: add HFU module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 45/73] net/ntnic: add MAC Rx module Serhii Iliushyk
` (29 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The IP Fragmenter module can fragment outgoing packets
based on a programmable MTU.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 82068746b3..509e1f6860 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1095,6 +1095,44 @@ static nthw_fpga_register_init_s hsh_registers[] = {
{ HSH_RCP_DATA, 1, 743, NTHW_FPGA_REG_TYPE_WO, 0, 23, hsh_rcp_data_fields },
};
+static nthw_fpga_field_init_s ifr_counters_ctrl_fields[] = {
+ { IFR_COUNTERS_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_COUNTERS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_counters_data_fields[] = {
+ { IFR_COUNTERS_DATA_DROP, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_ctrl_fields[] = {
+ { IFR_DF_BUF_CTRL_AVAILABLE, 11, 0, 0x0000 },
+ { IFR_DF_BUF_CTRL_MTU_PROFILE, 16, 11, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_data_fields[] = {
+ { IFR_DF_BUF_DATA_FIFO_DAT, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_ctrl_fields[] = {
+ { IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_data_fields[] = {
+ { IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 }, { IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 }, { IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ifr_registers[] = {
+ { IFR_COUNTERS_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_counters_ctrl_fields },
+ { IFR_COUNTERS_DATA, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_counters_data_fields },
+ { IFR_DF_BUF_CTRL, 2, 27, NTHW_FPGA_REG_TYPE_RO, 0, 2, ifr_df_buf_ctrl_fields },
+ { IFR_DF_BUF_DATA, 3, 128, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_df_buf_data_fields },
+ { IFR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_rcp_ctrl_fields },
+ { IFR_RCP_DATA, 1, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, ifr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s iic_adr_fields[] = {
{ IIC_ADR_SLV_ADR, 7, 1, 0 },
};
@@ -2071,6 +2109,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
+ { MOD_IFR, 0, MOD_IFR, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 9984, 6, ifr_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
{ MOD_IIC, 1, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 896, 22, iic_registers },
{ MOD_IIC, 2, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 24832, 22, iic_registers },
@@ -2255,5 +2294,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
};
--
2.45.0
* [PATCH v2 45/73] net/ntnic: add MAC Rx module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (43 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 44/73] net/ntnic: add IFR module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 46/73] net/ntnic: add MAC Tx module Serhii Iliushyk
` (28 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Receive module contains counters
that keep track of received packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 61 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +++++++++
4 files changed, 92 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 509e1f6860..eecd6342c0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1774,6 +1774,63 @@ static nthw_fpga_register_init_s mac_pcs_registers[] = {
},
};
+static nthw_fpga_field_init_s mac_rx_bad_fcs_fields[] = {
+ { MAC_RX_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_fragment_fields[] = {
+ { MAC_RX_FRAGMENT_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_bad_fcs_fields[] = {
+ { MAC_RX_PACKET_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_small_fields[] = {
+ { MAC_RX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_bytes_fields[] = {
+ { MAC_RX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_bytes_fields[] = {
+ { MAC_RX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_packets_fields[] = {
+ { MAC_RX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_packets_fields[] = {
+ { MAC_RX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_undersize_fields[] = {
+ { MAC_RX_UNDERSIZE_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_rx_registers[] = {
+ { MAC_RX_BAD_FCS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_bad_fcs_fields },
+ { MAC_RX_FRAGMENT, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_fragment_fields },
+ {
+ MAC_RX_PACKET_BAD_FCS, 7, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_packet_bad_fcs_fields
+ },
+ { MAC_RX_PACKET_SMALL, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_packet_small_fields },
+ { MAC_RX_TOTAL_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_bytes_fields },
+ {
+ MAC_RX_TOTAL_GOOD_BYTES, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_bytes_fields
+ },
+ {
+ MAC_RX_TOTAL_GOOD_PACKETS, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_packets_fields
+ },
+ { MAC_RX_TOTAL_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_packets_fields },
+ { MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2123,6 +2180,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_MAC_PCS, 1, MOD_MAC_PCS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB2, 11776, 44,
mac_pcs_registers
},
+ { MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
+ { MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2294,5 +2353,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index b6be02f45e..5983ba7095 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -29,6 +29,7 @@
#define MOD_IIC (0x7629cddbUL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
+#define MOD_MAC_RX (0x6347b490UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -43,7 +44,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (14)
+#define MOD_IDX_COUNT (31)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 3560eeda7d..5ebbec6c7e 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -30,6 +30,7 @@
#include "nthw_fpga_reg_defs_ins.h"
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
+#include "nthw_fpga_reg_defs_mac_rx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
new file mode 100644
index 0000000000..3829c10f3b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
@@ -0,0 +1,29 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_RX_
+#define _NTHW_FPGA_REG_DEFS_MAC_RX_
+
+/* MAC_RX */
+#define MAC_RX_BAD_FCS (0xca07f618UL)
+#define MAC_RX_BAD_FCS_COUNT (0x11d5ba0eUL)
+#define MAC_RX_FRAGMENT (0x5363b736UL)
+#define MAC_RX_FRAGMENT_COUNT (0xf664c9aUL)
+#define MAC_RX_PACKET_BAD_FCS (0x4cb8b34cUL)
+#define MAC_RX_PACKET_BAD_FCS_COUNT (0xb6701e28UL)
+#define MAC_RX_PACKET_SMALL (0xed318a65UL)
+#define MAC_RX_PACKET_SMALL_COUNT (0x72095ec7UL)
+#define MAC_RX_TOTAL_BYTES (0x831313e2UL)
+#define MAC_RX_TOTAL_BYTES_COUNT (0xe5d8be59UL)
+#define MAC_RX_TOTAL_GOOD_BYTES (0x912c2d1cUL)
+#define MAC_RX_TOTAL_GOOD_BYTES_COUNT (0x63bb5f3eUL)
+#define MAC_RX_TOTAL_GOOD_PACKETS (0xfbb4f497UL)
+#define MAC_RX_TOTAL_GOOD_PACKETS_COUNT (0xae9d21b0UL)
+#define MAC_RX_TOTAL_PACKETS (0xb0ea3730UL)
+#define MAC_RX_TOTAL_PACKETS_COUNT (0x532c885dUL)
+#define MAC_RX_UNDERSIZE (0xb6fa4bdbUL)
+#define MAC_RX_UNDERSIZE_COUNT (0x471945ffUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_RX_ */
--
2.45.0
* [PATCH v2 46/73] net/ntnic: add MAC Tx module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (44 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 45/73] net/ntnic: add MAC Rx module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 47/73] net/ntnic: add RPP LR module Serhii Iliushyk
` (27 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Transmit module contains counters
that keep track of transmitted packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 ++++++++++
4 files changed, 61 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index eecd6342c0..7a2f5aec32 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1831,6 +1831,40 @@ static nthw_fpga_register_init_s mac_rx_registers[] = {
{ MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
};
+static nthw_fpga_field_init_s mac_tx_packet_small_fields[] = {
+ { MAC_TX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_bytes_fields[] = {
+ { MAC_TX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_bytes_fields[] = {
+ { MAC_TX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_packets_fields[] = {
+ { MAC_TX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_packets_fields[] = {
+ { MAC_TX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_tx_registers[] = {
+ { MAC_TX_PACKET_SMALL, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_packet_small_fields },
+ { MAC_TX_TOTAL_BYTES, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_bytes_fields },
+ {
+ MAC_TX_TOTAL_GOOD_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_bytes_fields
+ },
+ {
+ MAC_TX_TOTAL_GOOD_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_packets_fields
+ },
+ { MAC_TX_TOTAL_PACKETS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_packets_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2182,6 +2216,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
},
{ MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
{ MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
+ { MOD_MAC_TX, 0, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 11264, 5, mac_tx_registers },
+ { MOD_MAC_TX, 1, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12800, 5, mac_tx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2353,5 +2389,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 5983ba7095..f4a913f3d2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -30,6 +30,7 @@
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
+#define MOD_MAC_TX (0x351d1316UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -44,7 +45,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (31)
+#define MOD_IDX_COUNT (32)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 5ebbec6c7e..7741aa563f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -31,6 +31,7 @@
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
#include "nthw_fpga_reg_defs_mac_rx.h"
+#include "nthw_fpga_reg_defs_mac_tx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
new file mode 100644
index 0000000000..6a77d449ae
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_TX_
+#define _NTHW_FPGA_REG_DEFS_MAC_TX_
+
+/* MAC_TX */
+#define MAC_TX_PACKET_SMALL (0xcfcb5e97UL)
+#define MAC_TX_PACKET_SMALL_COUNT (0x84345b01UL)
+#define MAC_TX_TOTAL_BYTES (0x7bd15854UL)
+#define MAC_TX_TOTAL_BYTES_COUNT (0x61fb238cUL)
+#define MAC_TX_TOTAL_GOOD_BYTES (0xcf0260fUL)
+#define MAC_TX_TOTAL_GOOD_BYTES_COUNT (0x8603398UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS (0xd89f151UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS_COUNT (0x12c47c77UL)
+#define MAC_TX_TOTAL_PACKETS (0xe37b5ed4UL)
+#define MAC_TX_TOTAL_PACKETS_COUNT (0x21ddd2ddUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_TX_ */
--
2.45.0
* [PATCH v2 47/73] net/ntnic: add RPP LR module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (45 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 46/73] net/ntnic: add MAC Tx module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 48/73] net/ntnic: add MOD SLC LR Serhii Iliushyk
` (26 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The RX Packet Process for Local Retransmit module can add bytes
in the FPGA TX pipeline, which is needed when a packet increases in size.
Note that this only makes room for packet expansion;
the actual expansion is done by other modules.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 32 ++++++++++++++++++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 7a2f5aec32..33437da204 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2138,6 +2138,35 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
+ { RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_data_fields[] = {
+ { RPP_LR_IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_ctrl_fields[] = {
+ { RPP_LR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_data_fields[] = {
+ { RPP_LR_RCP_DATA_EXP, 14, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpp_lr_registers[] = {
+ { RPP_LR_IFR_RCP_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_ifr_rcp_ctrl_fields },
+ { RPP_LR_IFR_RCP_DATA, 3, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, rpp_lr_ifr_rcp_data_fields },
+ { RPP_LR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_rcp_ctrl_fields },
+ { RPP_LR_RCP_DATA, 1, 14, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpp_lr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s rst9563_ctrl_fields[] = {
{ RST9563_CTRL_PTP_MMCM_CLKSEL, 1, 2, 1 },
{ RST9563_CTRL_TS_CLKSEL, 1, 1, 1 },
@@ -2230,6 +2259,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_QSL, 0, MOD_QSL, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 1792, 8, qsl_registers },
{ MOD_RAC, 0, MOD_RAC, 3, 0, NTHW_FPGA_BUS_TYPE_PCI, 8192, 14, rac_registers },
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
+ { MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
};
@@ -2389,5 +2419,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
};
--
2.45.0
* [PATCH v2 48/73] net/ntnic: add MOD SLC LR
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (46 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 47/73] net/ntnic: add RPP LR module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 49/73] net/ntnic: add Tx CPY module Serhii Iliushyk
` (25 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new head to the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 33437da204..0f69f89527 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2214,6 +2214,23 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
+static nthw_fpga_field_init_s slc_rcp_ctrl_fields[] = {
+ { SLC_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { SLC_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s slc_rcp_data_fields[] = {
+ { SLC_RCP_DATA_HEAD_DYN, 5, 1, 0x0000 }, { SLC_RCP_DATA_HEAD_OFS, 8, 6, 0x0000 },
+ { SLC_RCP_DATA_HEAD_SLC_EN, 1, 0, 0x0000 }, { SLC_RCP_DATA_PCAP, 1, 35, 0x0000 },
+ { SLC_RCP_DATA_TAIL_DYN, 5, 15, 0x0000 }, { SLC_RCP_DATA_TAIL_OFS, 15, 20, 0x0000 },
+ { SLC_RCP_DATA_TAIL_SLC_EN, 1, 14, 0x0000 },
+};
+
+static nthw_fpga_register_init_s slc_registers[] = {
+ { SLC_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, slc_rcp_ctrl_fields },
+ { SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2261,6 +2278,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
+ { MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2419,5 +2437,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index f4a913f3d2..865dd6a084 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,11 +41,12 @@
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
+#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (32)
+#define MOD_IDX_COUNT (33)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v2 49/73] net/ntnic: add Tx CPY module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (47 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 48/73] net/ntnic: add MOD SLC LR Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 50/73] net/ntnic: add Tx INS module Serhii Iliushyk
` (24 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Copy module writes data to packet fields based on the lookup
performed by the FLM module.
This is used for NAT and can support other actions based
on the RTE action MODIFY_FIELD.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 204 +++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 205 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 0f69f89527..60fd748ea2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,207 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s cpy_packet_reader0_ctrl_fields[] = {
+ { CPY_PACKET_READER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_PACKET_READER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_packet_reader0_data_fields[] = {
+ { CPY_PACKET_READER0_DATA_DYN, 5, 10, 0x0000 },
+ { CPY_PACKET_READER0_DATA_OFS, 10, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_ctrl_fields[] = {
+ { CPY_WRITER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_data_fields[] = {
+ { CPY_WRITER0_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER0_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER0_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER0_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER0_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_ctrl_fields[] = {
+ { CPY_WRITER0_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_data_fields[] = {
+ { CPY_WRITER0_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_ctrl_fields[] = {
+ { CPY_WRITER1_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_data_fields[] = {
+ { CPY_WRITER1_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER1_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER1_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER1_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER1_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_ctrl_fields[] = {
+ { CPY_WRITER1_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_data_fields[] = {
+ { CPY_WRITER1_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_ctrl_fields[] = {
+ { CPY_WRITER2_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_data_fields[] = {
+ { CPY_WRITER2_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER2_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER2_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER2_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER2_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_ctrl_fields[] = {
+ { CPY_WRITER2_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_data_fields[] = {
+ { CPY_WRITER2_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_ctrl_fields[] = {
+ { CPY_WRITER3_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_data_fields[] = {
+ { CPY_WRITER3_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER3_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER3_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER3_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER3_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_ctrl_fields[] = {
+ { CPY_WRITER3_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_data_fields[] = {
+ { CPY_WRITER3_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_ctrl_fields[] = {
+ { CPY_WRITER4_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_data_fields[] = {
+ { CPY_WRITER4_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER4_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER4_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER4_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER4_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_ctrl_fields[] = {
+ { CPY_WRITER4_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_data_fields[] = {
+ { CPY_WRITER4_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_ctrl_fields[] = {
+ { CPY_WRITER5_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_data_fields[] = {
+ { CPY_WRITER5_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER5_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER5_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER5_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER5_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_ctrl_fields[] = {
+ { CPY_WRITER5_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_data_fields[] = {
+ { CPY_WRITER5_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s cpy_registers[] = {
+ {
+ CPY_PACKET_READER0_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_ctrl_fields
+ },
+ {
+ CPY_PACKET_READER0_DATA, 25, 15, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_data_fields
+ },
+ { CPY_WRITER0_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer0_ctrl_fields },
+ { CPY_WRITER0_DATA, 1, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer0_data_fields },
+ {
+ CPY_WRITER0_MASK_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer0_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER0_MASK_DATA, 3, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer0_mask_data_fields
+ },
+ { CPY_WRITER1_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer1_ctrl_fields },
+ { CPY_WRITER1_DATA, 5, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer1_data_fields },
+ {
+ CPY_WRITER1_MASK_CTRL, 6, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer1_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER1_MASK_DATA, 7, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer1_mask_data_fields
+ },
+ { CPY_WRITER2_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer2_ctrl_fields },
+ { CPY_WRITER2_DATA, 9, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer2_data_fields },
+ {
+ CPY_WRITER2_MASK_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer2_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER2_MASK_DATA, 11, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer2_mask_data_fields
+ },
+ { CPY_WRITER3_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer3_ctrl_fields },
+ { CPY_WRITER3_DATA, 13, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer3_data_fields },
+ {
+ CPY_WRITER3_MASK_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer3_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER3_MASK_DATA, 15, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer3_mask_data_fields
+ },
+ { CPY_WRITER4_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer4_ctrl_fields },
+ { CPY_WRITER4_DATA, 17, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer4_data_fields },
+ {
+ CPY_WRITER4_MASK_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer4_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER4_MASK_DATA, 19, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer4_mask_data_fields
+ },
+ { CPY_WRITER5_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer5_ctrl_fields },
+ { CPY_WRITER5_DATA, 21, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer5_data_fields },
+ {
+ CPY_WRITER5_MASK_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer5_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER5_MASK_DATA, 23, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer5_mask_data_fields
+ },
+};
+
static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
{ CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2279,6 +2480,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
+ { MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2437,5 +2639,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 865dd6a084..0ab5ae0310 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -15,6 +15,7 @@
#define MOD_UNKNOWN (0L)/* Unknown/uninitialized - keep this as the first element */
#define MOD_CAT (0x30b447c2UL)
+#define MOD_CPY (0x1ddc186fUL)
#define MOD_CSU (0x3f470787UL)
#define MOD_DBS (0x80b29727UL)
#define MOD_FLM (0xe7ba53a4UL)
@@ -46,7 +47,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (33)
+#define MOD_IDX_COUNT (34)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 50/73] net/ntnic: add Tx INS module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (48 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 49/73] net/ntnic: add Tx CPY module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 51/73] net/ntnic: add Tx RPL module Serhii Iliushyk
` (23 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Inserter module injects zeros at an offset within a packet,
effectively expanding the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 19 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 60fd748ea2..c8841b1dc2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1457,6 +1457,22 @@ static nthw_fpga_register_init_s iic_registers[] = {
{ IIC_TX_FIFO_OCY, 69, 4, NTHW_FPGA_REG_TYPE_RO, 0, 1, iic_tx_fifo_ocy_fields },
};
+static nthw_fpga_field_init_s ins_rcp_ctrl_fields[] = {
+ { INS_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { INS_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ins_rcp_data_fields[] = {
+ { INS_RCP_DATA_DYN, 5, 0, 0x0000 },
+ { INS_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { INS_RCP_DATA_OFS, 10, 5, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ins_registers[] = {
+ { INS_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ins_rcp_ctrl_fields },
+ { INS_RCP_DATA, 1, 23, NTHW_FPGA_REG_TYPE_WO, 0, 3, ins_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s km_cam_ctrl_fields[] = {
{ KM_CAM_CTRL_ADR, 13, 0, 0x0000 },
{ KM_CAM_CTRL_CNT, 16, 16, 0x0000 },
@@ -2481,6 +2497,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
+ { MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2639,5 +2656,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 0ab5ae0310..8c0c727e16 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -28,6 +28,7 @@
#define MOD_I2CM (0x93bc7780UL)
#define MOD_IFR (0x9b01f1e6UL)
#define MOD_IIC (0x7629cddbUL)
+#define MOD_INS (0x24df4b78UL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
@@ -47,7 +48,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (34)
+#define MOD_IDX_COUNT (35)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v2 51/73] net/ntnic: add Tx RPL module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (49 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 50/73] net/ntnic: add Tx INS module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
` (22 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Replacer module can replace a range of bytes in a packet.
The replacement data is stored in a table in the module
and typically contains tunnel data.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index c8841b1dc2..a3d9f94fc6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2355,6 +2355,44 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpl_ext_ctrl_fields[] = {
+ { RPL_EXT_CTRL_ADR, 10, 0, 0x0000 },
+ { RPL_EXT_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_ext_data_fields[] = {
+ { RPL_EXT_DATA_RPL_PTR, 12, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_ctrl_fields[] = {
+ { RPL_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPL_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_data_fields[] = {
+ { RPL_RCP_DATA_DYN, 5, 0, 0x0000 }, { RPL_RCP_DATA_ETH_TYPE_WR, 1, 36, 0x0000 },
+ { RPL_RCP_DATA_EXT_PRIO, 1, 35, 0x0000 }, { RPL_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { RPL_RCP_DATA_OFS, 10, 5, 0x0000 }, { RPL_RCP_DATA_RPL_PTR, 12, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_ctrl_fields[] = {
+ { RPL_RPL_CTRL_ADR, 12, 0, 0x0000 },
+ { RPL_RPL_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_data_fields[] = {
+ { RPL_RPL_DATA_VALUE, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpl_registers[] = {
+ { RPL_EXT_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_ext_ctrl_fields },
+ { RPL_EXT_DATA, 3, 12, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_ext_data_fields },
+ { RPL_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rcp_ctrl_fields },
+ { RPL_RCP_DATA, 1, 37, NTHW_FPGA_REG_TYPE_WO, 0, 6, rpl_rcp_data_fields },
+ { RPL_RPL_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rpl_ctrl_fields },
+ { RPL_RPL_DATA, 5, 128, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_rpl_data_fields },
+};
+
static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
{ RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2498,6 +2536,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
+ { MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2656,5 +2695,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 8c0c727e16..2b059d98ff 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -40,6 +40,7 @@
#define MOD_QSL (0x448ed859UL)
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
+#define MOD_RPL (0x6de535c3UL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
@@ -48,7 +49,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (35)
+#define MOD_IDX_COUNT (36)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v2 52/73] net/ntnic: update alignment for virt queue structs
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (50 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 51/73] net/ntnic: add Tx RPL module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
` (21 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Update the incorrect alignment of the virt queue structures.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Fix __rte_packed usage
The original NT PMD driver uses #pragma pack(1), which is equivalent
to combining the packed and aligned attributes.
Since aligned(1) is implied by the packed attribute,
__rte_packed alone is sufficient.
---
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
index bde0fed273..e46a3bef28 100644
--- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
+++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <rte_common.h>
#include <unistd.h>
#include "ntos_drv.h"
@@ -67,20 +68,20 @@
} \
} while (0)
-struct __rte_aligned(8) virtq_avail {
+struct __rte_packed virtq_avail {
uint16_t flags;
uint16_t idx;
uint16_t ring[]; /* Queue Size */
};
-struct __rte_aligned(8) virtq_used_elem {
+struct __rte_packed virtq_used_elem {
/* Index of start of used descriptor chain. */
uint32_t id;
/* Total length of the descriptor chain which was used (written to) */
uint32_t len;
};
-struct __rte_aligned(8) virtq_used {
+struct __rte_packed virtq_used {
uint16_t flags;
uint16_t idx;
struct virtq_used_elem ring[]; /* Queue Size */
--
2.45.0
* [PATCH v2 53/73] net/ntnic: enable RSS feature
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (51 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 54/73] net/ntnic: add statistics API Serhii Iliushyk
` (20 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Enable the receive side scaling (RSS) feature.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 3 +
drivers/net/ntnic/include/create_elements.h | 1 +
drivers/net/ntnic/include/flow_api.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 6 ++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 77 +++++++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 73 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 212 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4cb9509742..e5d5abd0ed 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -10,6 +10,8 @@ Link status = Y
Queue start/stop = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
Linux = Y
x86-64 = Y
@@ -37,3 +39,4 @@ port_id = Y
queue = Y
raw_decap = Y
raw_encap = Y
+rss = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 70e6cad195..eaa578e72a 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,7 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_rss flow_rss;
struct flow_action_raw_encap encap;
struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 2e96fa5bed..4a1525f237 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -114,6 +114,8 @@ struct flow_nic_dev {
struct flow_eth_dev *eth_base;
pthread_mutex_t mtx;
+ /* RSS hashing configuration */
+ struct nt_eth_rss_conf rss_conf;
/* next NIC linked list */
struct flow_nic_dev *next;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34f2cad2cd..d61044402d 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1061,6 +1061,12 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+
+ /*
+ * Other
+ */
+ .hw_mod_hsh_rcp_flush = hw_mod_hsh_rcp_flush,
+ .flow_nic_set_hasher_fields = flow_nic_set_hasher_fields,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1dfd96eaac..bbf450697c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -603,6 +603,49 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RSS", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_rss rss_tmp;
+ const struct rte_flow_action_rss *rss =
+ memcpy_mask_if(&rss_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_rss));
+
+ if (rss->key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: RSS hash key length %u exceeds maximum value %u",
+ rss->key_len, MAX_RSS_KEY_LEN);
+ flow_nic_set_error(ERR_RSS_TOO_LONG_KEY, error);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < rss->queue_num; ++i) {
+ int hw_id = rx_queue_idx_to_hw_id(dev, rss->queue[i]);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+ }
+
+ fd->hsh.func = rss->func;
+ fd->hsh.types = rss->types;
+ fd->hsh.key = rss->key;
+ fd->hsh.key_len = rss->key_len;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RSS func: %d, types: 0x%" PRIX64 ", key_len: %d",
+ dev, rss->func, rss->types, rss->key_len);
+
+ fd->full_offload = 0;
+ *num_queues += rss->queue_num;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MARK:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bfca8f28b1..1b25621537 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -214,6 +214,14 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
dev_info->max_rx_pktlen = HW_MAX_PKT_LEN;
dev_info->max_mtu = MAX_MTU;
+ if (p_adapter_info->fpga_info.profile == FPGA_INFO_PROFILE_INLINE) {
+ dev_info->flow_type_rss_offloads = NT_ETH_RSS_OFFLOAD_MASK;
+ dev_info->hash_key_size = MAX_RSS_KEY_LEN;
+
+ dev_info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(TOEPLITZ);
+ }
+
if (internals->p_drv) {
dev_info->max_rx_queues = internals->nb_rx_queues;
dev_info->max_tx_queues = internals->nb_tx_queues;
@@ -1372,6 +1380,73 @@ promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
return 0;
}
+static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+ struct nt_eth_rss_conf tmp_rss_conf = { 0 };
+ const int hsh_idx = 0; /* hsh index 0 means the default receipt in HSH module */
+ int res = 0;
+
+ if (rss_conf->rss_key != NULL) {
+ if (rss_conf->rss_key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, NTNIC,
+ "ERROR: - RSS hash key length %u exceeds maximum value %u",
+ rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
+ return -1;
+ }
+
+ rte_memcpy(&tmp_rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+ }
+
+ tmp_rss_conf.algorithm = rss_conf->algorithm;
+
+ tmp_rss_conf.rss_hf = rss_conf->rss_hf;
+ res = flow_filter_ops->flow_nic_set_hasher_fields(ndev, hsh_idx, tmp_rss_conf);
+
+ if (res == 0) {
+ flow_filter_ops->hw_mod_hsh_rcp_flush(&ndev->be, hsh_idx, 1);
+ rte_memcpy(&ndev->rss_conf, &tmp_rss_conf, sizeof(struct nt_eth_rss_conf));
+
+ } else {
+ NT_LOG(ERR, NTNIC, "ERROR: - RSS hash update failed with error %i", res);
+ }
+
+ return res;
+}
+
+static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+
+ rss_conf->algorithm = (enum rte_eth_hash_function)ndev->rss_conf.algorithm;
+
+ rss_conf->rss_hf = ndev->rss_conf.rss_hf;
+
+ /*
+ * copy full stored key into rss_key and pad it with
+ * zeros up to rss_key_len / MAX_RSS_KEY_LEN
+ */
+ if (rss_conf->rss_key != NULL) {
+ int key_len = rss_conf->rss_key_len < MAX_RSS_KEY_LEN ? rss_conf->rss_key_len
+ : MAX_RSS_KEY_LEN;
+ memset(rss_conf->rss_key, 0, rss_conf->rss_key_len);
+ rte_memcpy(rss_conf->rss_key, &ndev->rss_conf.rss_key, key_len);
+ rss_conf->rss_key_len = key_len;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
@@ -1395,6 +1470,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
+ .rss_hash_update = eth_dev_rss_hash_update,
+ .rss_hash_conf_get = rss_hash_conf_get,
};
/*
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 87b26bd315..4962ab8d5a 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -317,6 +317,79 @@ int create_action_elements_inline(struct cnv_action_s *action,
* Non-compatible actions handled here
*/
switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RSS: {
+ const struct rte_flow_action_rss *rss =
+ (const struct rte_flow_action_rss *)actions[aidx].conf;
+
+ switch (rss->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_DEFAULT;
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_TOEPLITZ;
+
+ if (rte_is_power_of_2(rss->queue_num) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - for Toeplitz the number of queues must be power of two");
+ return -1;
+ }
+
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT:
+ case RTE_ETH_HASH_FUNCTION_MAX:
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported function: %u",
+ rss->func);
+ return -1;
+ }
+
+ uint64_t tmp_rss_types = 0;
+
+ switch (rss->level) {
+ case 1:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_OUTERMOST;
+ break;
+
+ case 2:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_INNERMOST;
+ break;
+
+ case 0:
+ /* keep level mask specified at types */
+ action->flow_rss.types = rss->types;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported level: %u",
+ rss->level);
+ return -1;
+ }
+
+ action->flow_rss.level = 0;
+ action->flow_rss.key_len = rss->key_len;
+ action->flow_rss.queue_num = rss->queue_num;
+ action->flow_rss.key = rss->key;
+ action->flow_rss.queue = rss->queue;
+ action->flow_actions[aidx].conf = &action->flow_rss;
+ }
+ break;
+
case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
const struct rte_flow_action_raw_decap *decap =
(const struct rte_flow_action_raw_decap *)actions[aidx]
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 12baa13800..e40ed9b949 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -316,6 +316,13 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+
+ /*
+ * Other
+ */
+ int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+ int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v2 54/73] net/ntnic: add statistics API
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (52 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 55/73] net/ntnic: add rpf module Serhii Iliushyk
` (19 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the statistics init, setup, get and reset APIs together with
their implementation.
Add the statistics FPGA register defines.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 192 +++++++++
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 149 +++++++
drivers/net/ntnic/include/ntos_drv.h | 9 +
.../ntnic/include/stream_binary_flow_api.h | 5 +
drivers/net/ntnic/meson.build | 3 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 1 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 10 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 370 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 40 ++
drivers/net/ntnic/ntnic_ethdev.c | 119 +++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 132 +++++++
drivers/net/ntnic/ntnic_mod_reg.c | 30 ++
drivers/net/ntnic/ntnic_mod_reg.h | 17 +
drivers/net/ntnic/ntutil/nt_util.h | 1 +
21 files changed, 1119 insertions(+), 12 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_adapter.c b/drivers/net/ntnic/adapter/nt4ga_adapter.c
index d9e6716c30..fa72dfda8d 100644
--- a/drivers/net/ntnic/adapter/nt4ga_adapter.c
+++ b/drivers/net/ntnic/adapter/nt4ga_adapter.c
@@ -212,19 +212,26 @@ static int nt4ga_adapter_init(struct adapter_info_s *p_adapter_info)
}
}
- nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
- if (p_nthw_rmc == NULL) {
- NT_LOG(ERR, NTNIC, "Failed to allocate memory for RMC module");
- return -1;
- }
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
- res = nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
- if (res) {
- NT_LOG(ERR, NTNIC, "Failed to initialize RMC module");
- return -1;
- }
+ if (nt4ga_stat_ops != NULL) {
+ /* Nt4ga Stat init/setup */
+ res = nt4ga_stat_ops->nt4ga_stat_init(p_adapter_info);
+
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot initialize the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+
+ res = nt4ga_stat_ops->nt4ga_stat_setup(p_adapter_info);
- nthw_rmc_unblock(p_nthw_rmc, false);
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot setup the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+ }
return 0;
}
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
new file mode 100644
index 0000000000..0e20f3ea45
--- /dev/null
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -0,0 +1,192 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+#include "nt_util.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "nthw_fpga_param_defs.h"
+#include "nt4ga_adapter.h"
+#include "ntnic_nim.h"
+#include "flow_filter.h"
+#include "ntnic_mod_reg.h"
+
+#define DEFAULT_MAX_BPS_SPEED 100e9
+
+static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
+{
+ const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
+ fpga_info_t *fpga_info = &p_adapter_info->fpga_info;
+ nthw_fpga_t *p_fpga = fpga_info->mp_fpga;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+
+ if (p_nt4ga_stat) {
+ memset(p_nt4ga_stat, 0, sizeof(nt4ga_stat_t));
+
+ } else {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ {
+ nthw_stat_t *p_nthw_stat = nthw_stat_new();
+
+ if (!p_nthw_stat) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ if (nthw_rmc_init(NULL, p_fpga, 0) == 0) {
+ nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
+
+ if (!p_nthw_rmc) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
+ p_nt4ga_stat->mp_nthw_rmc = p_nthw_rmc;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rmc = NULL;
+ }
+
+ p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
+ nthw_stat_init(p_nthw_stat, p_fpga, 0);
+
+ p_nt4ga_stat->mn_rx_host_buffers = p_nthw_stat->m_nb_rx_host_buffers;
+ p_nt4ga_stat->mn_tx_host_buffers = p_nthw_stat->m_nb_tx_host_buffers;
+
+ p_nt4ga_stat->mn_rx_ports = p_nthw_stat->m_nb_rx_ports;
+ p_nt4ga_stat->mn_tx_ports = p_nthw_stat->m_nb_tx_ports;
+ }
+
+ return 0;
+}
+
+static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
+{
+ const int n_physical_adapter_no = p_adapter_info->adapter_no;
+ (void)n_physical_adapter_no;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+
+ /* Allocate and map memory for fpga statistics */
+ {
+ uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
+ sizeof(p_nthw_stat->mp_timestamp));
+ struct nt_dma_s *p_dma;
+ int numa_node = p_adapter_info->fpga_info.numa_node;
+
+ /* FPGA needs a 16K alignment on Statistics */
+ p_dma = nt_dma_alloc(n_stat_size, 0x4000, numa_node);
+
+ if (!p_dma) {
+ NT_LOG_DBGX(ERR, NTNIC, "p_dma alloc failed");
+ return -1;
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%x @%d %" PRIx64 " %" PRIx64, n_stat_size, numa_node,
+ p_dma->addr, p_dma->iova);
+
+ NT_LOG(DBG, NTNIC,
+ "DMA: Physical adapter %02d, PA = 0x%016" PRIX64 " DMA = 0x%016" PRIX64
+ " size = 0x%" PRIX32 "",
+ n_physical_adapter_no, p_dma->iova, p_dma->addr, n_stat_size);
+
+ p_nt4ga_stat->p_stat_dma_virtual = (uint32_t *)p_dma->addr;
+ p_nt4ga_stat->n_stat_size = n_stat_size;
+ p_nt4ga_stat->p_stat_dma = p_dma;
+
+ memset(p_nt4ga_stat->p_stat_dma_virtual, 0xaa, n_stat_size);
+ nthw_stat_set_dma_address(p_nthw_stat, p_dma->iova,
+ p_nt4ga_stat->p_stat_dma_virtual);
+ }
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+
+ p_nt4ga_stat->mp_stat_structs_color =
+ calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_color) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_hb =
+ calloc(p_nt4ga_stat->mn_rx_host_buffers + p_nt4ga_stat->mn_tx_host_buffers,
+ sizeof(struct host_buffer_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_hb) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_rx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_tx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_port_load =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
+
+ if (!p_nt4ga_stat->mp_port_load) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+#ifdef NIM_TRIGGER
+ uint64_t max_bps_speed = nt_get_max_link_speed(p_adapter_info->nt4ga_link.speed_capa);
+
+ if (max_bps_speed == 0)
+ max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+
+#else
+ uint64_t max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+ NT_LOG(ERR, NTNIC, "NIM module not included");
+#endif
+
+ for (int p = 0; p < NUM_ADAPTER_PORTS_MAX; p++) {
+ p_nt4ga_stat->mp_port_load[p].rx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].tx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].rx_pps_max = max_bps_speed / (8 * (20 + 64));
+ p_nt4ga_stat->mp_port_load[p].tx_pps_max = max_bps_speed / (8 * (20 + 64));
+ }
+
+ memset(p_nt4ga_stat->a_stat_structs_color_base, 0,
+ sizeof(struct color_counters) * NT_MAX_COLOR_FLOW_STATS);
+ p_nt4ga_stat->last_timestamp = 0;
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ return 0;
+}
+
+static struct nt4ga_stat_ops ops = {
+ .nt4ga_stat_init = nt4ga_stat_init,
+ .nt4ga_stat_setup = nt4ga_stat_setup,
+};
+
+void nt4ga_stat_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "Stat module was initialized");
+ register_nt4ga_stat_ops(&ops);
+}
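A note on the port-load limits set in nt4ga_stat_setup(): rx_pps_max/tx_pps_max are derived from the line rate divided by the bits in a minimum-size frame on the wire — 64 bytes of frame plus 20 bytes of overhead (7-byte preamble, 1-byte SFD, 12-byte inter-frame gap). A minimal standalone sketch of that arithmetic (not driver code, constants illustrative):

```c
#include <inttypes.h>
#include <stdint.h>

/*
 * Theoretical max packets/s on a link: line rate in bits/s divided by the
 * bits in a minimum-size Ethernet frame (64 bytes) plus per-frame wire
 * overhead (20 bytes: preamble 7, SFD 1, inter-frame gap 12).
 */
static uint64_t max_pps(uint64_t bps)
{
	return bps / (8 * (20 + 64));
}
```

At 100 Gbit/s this yields roughly 148.8 Mpps, the familiar 64-byte line-rate figure.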
diff --git a/drivers/net/ntnic/include/common_adapter_defs.h b/drivers/net/ntnic/include/common_adapter_defs.h
new file mode 100644
index 0000000000..6ed9121f0f
--- /dev/null
+++ b/drivers/net/ntnic/include/common_adapter_defs.h
@@ -0,0 +1,15 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _COMMON_ADAPTER_DEFS_H_
+#define _COMMON_ADAPTER_DEFS_H_
+
+/*
+ * Declarations shared by NT adapter types.
+ */
+#define NUM_ADAPTER_MAX (8)
+#define NUM_ADAPTER_PORTS_MAX (128)
+
+#endif /* _COMMON_ADAPTER_DEFS_H_ */
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index eaa578e72a..1456977837 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -46,6 +46,10 @@ struct rte_flow {
uint32_t flow_stat_id;
+ uint64_t stat_pkts;
+ uint64_t stat_bytes;
+ uint8_t stat_tcp_flags;
+
uint16_t caller_id;
};
diff --git a/drivers/net/ntnic/include/nt4ga_adapter.h b/drivers/net/ntnic/include/nt4ga_adapter.h
index 809135f130..fef79ce358 100644
--- a/drivers/net/ntnic/include/nt4ga_adapter.h
+++ b/drivers/net/ntnic/include/nt4ga_adapter.h
@@ -6,6 +6,7 @@
#ifndef _NT4GA_ADAPTER_H_
#define _NT4GA_ADAPTER_H_
+#include "ntnic_stat.h"
#include "nt4ga_link.h"
typedef struct hw_info_s {
@@ -30,6 +31,7 @@ typedef struct hw_info_s {
#include "ntnic_stat.h"
typedef struct adapter_info_s {
+ struct nt4ga_stat_s nt4ga_stat;
struct nt4ga_filter_s nt4ga_filter;
struct nt4ga_link_s nt4ga_link;
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8ebdd98db0..1135e9a539 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -15,6 +15,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
+ pthread_mutex_t stat_lck;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 148088fe1d..2aee3f8425 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -6,6 +6,155 @@
#ifndef NTNIC_STAT_H_
#define NTNIC_STAT_H_
+#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_fpga_model.h"
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+struct nthw_stat {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_stat;
+ int mn_instance;
+
+ int mn_stat_layout_version;
+
+ bool mb_has_tx_stats;
+
+ int m_nb_phy_ports;
+ int m_nb_nim_ports;
+
+ int m_nb_rx_ports;
+ int m_nb_tx_ports;
+
+ int m_nb_rx_host_buffers;
+ int m_nb_tx_host_buffers;
+
+ int m_dbs_present;
+
+ int m_rx_port_replicate;
+
+ int m_nb_color_counters;
+
+ int m_nb_rx_hb_counters;
+ int m_nb_tx_hb_counters;
+
+ int m_nb_rx_port_counters;
+ int m_nb_tx_port_counters;
+
+ int m_nb_counters;
+
+ int m_nb_rpp_per_ps;
+
+ nthw_field_t *mp_fld_dma_ena;
+ nthw_field_t *mp_fld_cnt_clear;
+
+ nthw_field_t *mp_fld_tx_disable;
+
+ nthw_field_t *mp_fld_cnt_freeze;
+
+ nthw_field_t *mp_fld_stat_toggle_missed;
+
+ nthw_field_t *mp_fld_dma_lsb;
+ nthw_field_t *mp_fld_dma_msb;
+
+ nthw_field_t *mp_fld_load_bin;
+ nthw_field_t *mp_fld_load_bps_rx0;
+ nthw_field_t *mp_fld_load_bps_rx1;
+ nthw_field_t *mp_fld_load_bps_tx0;
+ nthw_field_t *mp_fld_load_bps_tx1;
+ nthw_field_t *mp_fld_load_pps_rx0;
+ nthw_field_t *mp_fld_load_pps_rx1;
+ nthw_field_t *mp_fld_load_pps_tx0;
+ nthw_field_t *mp_fld_load_pps_tx1;
+
+ uint64_t m_stat_dma_physical;
+ uint32_t *mp_stat_dma_virtual;
+
+ uint64_t *mp_timestamp;
+};
+
+typedef struct nthw_stat nthw_stat_t;
+typedef struct nthw_stat nthw_stat;
+
+struct color_counters {
+ uint64_t color_packets;
+ uint64_t color_bytes;
+ uint8_t tcp_flags;
+};
+
+struct host_buffer_counters {
+};
+
+struct port_load_counters {
+ uint64_t rx_pps_max;
+ uint64_t tx_pps_max;
+ uint64_t rx_bps_max;
+ uint64_t tx_bps_max;
+};
+
+struct port_counters_v2 {
+};
+
+struct flm_counters_v1 {
+};
+
+struct nt4ga_stat_s {
+ nthw_stat_t *mp_nthw_stat;
+ nthw_rmc_t *mp_nthw_rmc;
+ struct nt_dma_s *p_stat_dma;
+ uint32_t *p_stat_dma_virtual;
+ uint32_t n_stat_size;
+
+ uint64_t last_timestamp;
+
+ int mn_rx_host_buffers;
+ int mn_tx_host_buffers;
+
+ int mn_rx_ports;
+ int mn_tx_ports;
+
+ struct color_counters *mp_stat_structs_color;
+ /* For calculating increments between stats polls */
+ struct color_counters a_stat_structs_color_base[NT_MAX_COLOR_FLOW_STATS];
+
+ /* Port counters for inline */
+ struct {
+ struct port_counters_v2 *mp_stat_structs_port_rx;
+ struct port_counters_v2 *mp_stat_structs_port_tx;
+ } cap;
+
+ struct host_buffer_counters *mp_stat_structs_hb;
+ struct port_load_counters *mp_port_load;
+
+ /* Rx/Tx totals: */
+ uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
+
+ uint64_t a_port_rx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ /* Base is for calculating increments between statistics reads */
+ uint64_t a_port_rx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_packets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_packets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_drops_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_drops_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+};
+
+typedef struct nt4ga_stat_s nt4ga_stat_t;
+
+nthw_stat_t *nthw_stat_new(void);
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_stat_delete(nthw_stat_t *p);
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual);
+int nthw_stat_trigger(nthw_stat_t *p);
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 8fd577dfe3..7b3c8ff3d6 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -57,6 +57,9 @@ struct __rte_cache_aligned ntnic_rx_queue {
struct flow_queue_id_s queue; /* queue info - user id and hw queue index */
struct rte_mempool *mb_pool; /* mbuf memory pool */
uint16_t buf_size; /* Size of data area in mbuf */
+ unsigned long rx_pkts; /* Rx packet statistics */
+ unsigned long rx_bytes; /* Rx bytes statistics */
+ unsigned long err_pkts; /* Rx error packet statistics */
int enabled; /* Enabling/disabling of this queue */
struct hwq_s hwq;
@@ -80,6 +83,9 @@ struct __rte_cache_aligned ntnic_tx_queue {
int rss_target_id;
uint32_t port; /* Tx port for this queue */
+ unsigned long tx_pkts; /* Tx packet statistics */
+ unsigned long tx_bytes; /* Tx bytes statistics */
+ unsigned long err_pkts; /* Tx error packet stat */
int enabled; /* Enabling/disabling of this queue */
enum fpga_info_profile profile; /* Inline / Capture */
};
@@ -95,6 +101,7 @@ struct pmd_internals {
/* Offset of the VF from the PF */
uint8_t vf_offset;
uint32_t port;
+ uint32_t port_id;
nt_meta_port_type_t type;
struct flow_queue_id_s vpq[MAX_QUEUES];
unsigned int vpq_nb_vq;
@@ -107,6 +114,8 @@ struct pmd_internals {
struct rte_ether_addr eth_addrs[NUM_MAC_ADDRS_PER_PORT];
/* Multicast ethernet (MAC) addresses. */
struct rte_ether_addr mc_addrs[NUM_MULTICAST_ADDRS_PER_PORT];
+ uint64_t last_stat_rtc;
+ uint64_t rx_missed;
struct pmd_internals *next;
};
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index e5fe686d99..4ce1561033 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,7 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include <rte_ether.h>
#include "rte_flow.h"
#include "rte_flow_driver.h"
@@ -44,6 +45,10 @@
#define FLOW_MAX_QUEUES 128
#define RAW_ENCAP_DECAP_ELEMS_MAX 16
+
+extern uint64_t rte_tsc_freq;
+extern rte_spinlock_t hwlock;
+
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 92167d24e4..216341bb11 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -25,10 +25,12 @@ includes = [
# all sources
sources = files(
'adapter/nt4ga_adapter.c',
+ 'adapter/nt4ga_stat/nt4ga_stat.c',
'dbsconfig/ntnic_dbsconfig.c',
'link_mgmt/link_100g/nt4ga_link_100g.c',
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
+ 'ntnic_filter/ntnic_filter.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
@@ -48,6 +50,7 @@ sources = files(
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
+ 'nthw/stat/nthw_stat.c',
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index 2345820bdc..b239752674 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -44,6 +44,7 @@ typedef struct nthw_rmc nthw_rmc;
nthw_rmc_t *nthw_rmc_new(void);
int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 4a01424c24..748519aeb4 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,16 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+void nthw_rmc_block(nthw_rmc_t *p)
+{
+ /* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
+ if (!p->mb_administrative_block) {
+ nthw_field_set_flush(p->mp_fld_ctrl_block_stat_drop);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_keep_alive);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_mac_port);
+ }
+}
+
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary)
{
uint32_t n_block_mask = ~0U << (b_is_secondary ? p->mn_nims : p->mn_ports);
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
new file mode 100644
index 0000000000..6adcd2e090
--- /dev/null
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -0,0 +1,370 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "nt_util.h"
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "ntnic_stat.h"
+
+#include <malloc.h>
+
+nthw_stat_t *nthw_stat_new(void)
+{
+ nthw_stat_t *p = malloc(sizeof(nthw_stat_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_stat_t));
+
+ return p;
+}
+
+void nthw_stat_delete(nthw_stat_t *p)
+{
+ if (p)
+ free(p);
+}
+
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ uint64_t n_module_version_packed64 = -1;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_STA, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: STAT %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_stat = mod;
+
+ n_module_version_packed64 = nthw_module_get_version_packed64(p->mp_mod_stat);
+	NT_LOG(DBG, NTHW, "%s: STAT %d: version=0x%08" PRIX64, p_adapter_id_str, p->mn_instance,
+		n_module_version_packed64);
+
+ {
+ nthw_register_t *p_reg;
+ /* STA_CFG register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_CFG);
+ p->mp_fld_dma_ena = nthw_register_get_field(p_reg, STA_CFG_DMA_ENA);
+ p->mp_fld_cnt_clear = nthw_register_get_field(p_reg, STA_CFG_CNT_CLEAR);
+
+ /* CFG: fields NOT available from v. 3 */
+ p->mp_fld_tx_disable = nthw_register_query_field(p_reg, STA_CFG_TX_DISABLE);
+ p->mp_fld_cnt_freeze = nthw_register_query_field(p_reg, STA_CFG_CNT_FRZ);
+
+ /* STA_STATUS register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_STATUS);
+ p->mp_fld_stat_toggle_missed =
+ nthw_register_get_field(p_reg, STA_STATUS_STAT_TOGGLE_MISSED);
+
+ /* HOST_ADR registers */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_LSB);
+ p->mp_fld_dma_lsb = nthw_register_get_field(p_reg, STA_HOST_ADR_LSB_LSB);
+
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_MSB);
+ p->mp_fld_dma_msb = nthw_register_get_field(p_reg, STA_HOST_ADR_MSB_MSB);
+
+ /* Binning cycles */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BIN);
+
+ if (p_reg) {
+ p->mp_fld_load_bin = nthw_register_get_field(p_reg, STA_LOAD_BIN_BIN);
+
+ /* Bandwidth load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx0 = NULL;
+ }
+
+ /* Bandwidth load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx1 = NULL;
+ }
+
+ /* Bandwidth load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx0 = NULL;
+ }
+
+ /* Bandwidth load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx1 = NULL;
+ }
+
+ /* Packet load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx0 = NULL;
+ }
+
+ /* Packet load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx1 = NULL;
+ }
+
+ /* Packet load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx0 = NULL;
+ }
+
+ /* Packet load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+
+ } else {
+ p->mp_fld_load_bin = NULL;
+ p->mp_fld_load_bps_rx0 = NULL;
+ p->mp_fld_load_bps_rx1 = NULL;
+ p->mp_fld_load_bps_tx0 = NULL;
+ p->mp_fld_load_bps_tx1 = NULL;
+ p->mp_fld_load_pps_rx0 = NULL;
+ p->mp_fld_load_pps_rx1 = NULL;
+ p->mp_fld_load_pps_tx0 = NULL;
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+ }
+
+ /* Params */
+ p->m_nb_nim_ports = nthw_fpga_get_product_param(p_fpga, NT_NIMS, 0);
+ p->m_nb_phy_ports = nthw_fpga_get_product_param(p_fpga, NT_PHY_PORTS, 0);
+
+ /* VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_STA_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_PORTS, 0);
+ }
+ }
+
+ p->m_nb_rpp_per_ps = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+
+ p->m_nb_tx_ports = nthw_fpga_get_product_param(p_fpga, NT_TX_PORTS, 0);
+ p->m_rx_port_replicate = nthw_fpga_get_product_param(p_fpga, NT_RX_PORT_REPLICATE, 0);
+
+ /* VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_STA_COLORS, 64) * 2;
+
+ if (p->m_nb_color_counters == 0) {
+ /* non-VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_CAT_FUNCS, 0) * 2;
+ }
+
+ p->m_nb_rx_host_buffers = nthw_fpga_get_product_param(p_fpga, NT_QUEUES, 0);
+ p->m_nb_tx_host_buffers = p->m_nb_rx_host_buffers;
+
+ p->m_dbs_present = nthw_fpga_get_product_param(p_fpga, NT_DBS_PRESENT, 0);
+
+ p->m_nb_rx_hb_counters = (p->m_nb_rx_host_buffers * (6 + 2 *
+ (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ?
+ p->m_dbs_present : 0)));
+
+ p->m_nb_tx_hb_counters = 0;
+
+ p->m_nb_rx_port_counters = 42 +
+ 2 * (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ? p->m_dbs_present : 0);
+ p->m_nb_tx_port_counters = 0;
+
+ p->m_nb_counters =
+ p->m_nb_color_counters + p->m_nb_rx_hb_counters + p->m_nb_tx_hb_counters;
+
+ p->mn_stat_layout_version = 0;
+
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 9)) {
+ p->mn_stat_layout_version = 7;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 8)) {
+ p->mn_stat_layout_version = 6;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->mn_stat_layout_version = 5;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 4)) {
+ p->mn_stat_layout_version = 4;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 3)) {
+ p->mn_stat_layout_version = 3;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 2)) {
+ p->mn_stat_layout_version = 2;
+
+ } else if (n_module_version_packed64 > VERSION_PACKED64(0, 0)) {
+ p->mn_stat_layout_version = 1;
+
+ } else {
+ p->mn_stat_layout_version = 0;
+		NT_LOG(ERR, NTHW, "%s: unknown module_version 0x%08" PRIX64 " layout=%d",
+			p_adapter_id_str, n_module_version_packed64, p->mn_stat_layout_version);
+ }
+
+ assert(p->mn_stat_layout_version);
+
+ /* STA module 0.2+ adds IPF counters per port (Rx feature) */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 2))
+ p->m_nb_rx_port_counters += 6;
+
+ /* STA module 0.3+ adds TX stats */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3) || p->m_nb_tx_ports >= 1)
+ p->mb_has_tx_stats = true;
+
+ /* STA module 0.3+ adds TX stat counters */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3))
+ p->m_nb_tx_port_counters += 22;
+
+ /* STA module 0.4+ adds TX drop event counter */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 4))
+ p->m_nb_tx_port_counters += 1; /* TX drop event counter */
+
+ /*
+ * STA module 0.6+ adds pkt filter drop octets+pkts, retransmit and
+ * duplicate counters
+ */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->m_nb_rx_port_counters += 4;
+ p->m_nb_tx_port_counters += 1;
+ }
+
+ p->m_nb_counters += (p->m_nb_rx_ports * p->m_nb_rx_port_counters);
+
+ if (p->mb_has_tx_stats)
+ p->m_nb_counters += (p->m_nb_tx_ports * p->m_nb_tx_port_counters);
+
+ /* Output params (debug) */
+ NT_LOG(DBG, NTHW, "%s: nims=%d rxports=%d txports=%d rxrepl=%d colors=%d queues=%d",
+ p_adapter_id_str, p->m_nb_nim_ports, p->m_nb_rx_ports, p->m_nb_tx_ports,
+ p->m_rx_port_replicate, p->m_nb_color_counters, p->m_nb_rx_host_buffers);
+ NT_LOG(DBG, NTHW, "%s: hbs=%d hbcounters=%d rxcounters=%d txcounters=%d",
+ p_adapter_id_str, p->m_nb_rx_host_buffers, p->m_nb_rx_hb_counters,
+ p->m_nb_rx_port_counters, p->m_nb_tx_port_counters);
+ NT_LOG(DBG, NTHW, "%s: layout=%d", p_adapter_id_str, p->mn_stat_layout_version);
+ NT_LOG(DBG, NTHW, "%s: counters=%d (0x%X)", p_adapter_id_str, p->m_nb_counters,
+ p->m_nb_counters);
+
+ /* Init */
+ if (p->mp_fld_tx_disable)
+ nthw_field_set_flush(p->mp_fld_tx_disable);
+
+ nthw_field_update_register(p->mp_fld_cnt_clear);
+ nthw_field_set_flush(p->mp_fld_cnt_clear);
+ nthw_field_clr_flush(p->mp_fld_cnt_clear);
+
+ nthw_field_update_register(p->mp_fld_stat_toggle_missed);
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_clr_flush(p->mp_fld_dma_ena);
+ nthw_field_update_register(p->mp_fld_dma_ena);
+
+ /* Set the sliding windows size for port load */
+ if (p->mp_fld_load_bin) {
+ uint32_t rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ uint32_t bin =
+ (uint32_t)(((PORT_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) -
+ 1ULL);
+ nthw_field_set_val_flush32(p->mp_fld_load_bin, bin);
+ }
+
+ return 0;
+}
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual)
+{
+ assert(p_stat_dma_virtual);
+ p->mp_timestamp = NULL;
+
+ p->m_stat_dma_physical = stat_dma_physical;
+ p->mp_stat_dma_virtual = p_stat_dma_virtual;
+
+ memset(p->mp_stat_dma_virtual, 0, (p->m_nb_counters * sizeof(uint32_t)));
+
+ nthw_field_set_val_flush32(p->mp_fld_dma_msb,
+ (uint32_t)((p->m_stat_dma_physical >> 32) & 0xffffffff));
+ nthw_field_set_val_flush32(p->mp_fld_dma_lsb,
+ (uint32_t)(p->m_stat_dma_physical & 0xffffffff));
+
+ p->mp_timestamp = (uint64_t *)(p->mp_stat_dma_virtual + p->m_nb_counters);
+ NT_LOG(DBG, NTHW,
+ "stat_dma_physical=%" PRIX64 " p_stat_dma_virtual=%" PRIX64
+ " mp_timestamp=%" PRIX64 "", p->m_stat_dma_physical,
+ (uint64_t)p->mp_stat_dma_virtual, (uint64_t)p->mp_timestamp);
+ *p->mp_timestamp = (uint64_t)(int64_t)-1;
+ return 0;
+}
+
+int nthw_stat_trigger(nthw_stat_t *p)
+{
+ int n_toggle_miss = nthw_field_get_updated(p->mp_fld_stat_toggle_missed);
+
+ if (n_toggle_miss)
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ if (p->mp_timestamp)
+ *p->mp_timestamp = -1; /* Clear old ts */
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_set_flush(p->mp_fld_dma_ena);
+
+ return 0;
+}
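The DMA buffer that nthw_stat_set_dma_address() programs holds the 32-bit counter array followed immediately by one 64-bit timestamp the FPGA writes after each stats transfer — which is why nt4ga_stat_setup() sizes the area as m_nb_counters * sizeof(uint32_t) + sizeof(mp_timestamp). A standalone sketch of that layout arithmetic (counter count is hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

/* Total DMA area: nb_counters 32-bit counters plus a trailing 64-bit timestamp. */
static size_t stat_dma_size(int nb_counters)
{
	return (size_t)nb_counters * sizeof(uint32_t) + sizeof(uint64_t);
}

/* Byte offset of the timestamp: it sits right after the counter array,
 * matching mp_timestamp = mp_stat_dma_virtual + m_nb_counters. */
static size_t timestamp_offset(int nb_counters)
{
	return (size_t)nb_counters * sizeof(uint32_t);
}
```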
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 2b059d98ff..ddc144dc02 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -46,6 +46,7 @@
#define MOD_SDC (0xd2369530UL)
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
+#define MOD_STA (0x76fae64dUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7741aa563f..8f196f885f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -45,6 +45,7 @@
#include "nthw_fpga_reg_defs_sdc.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
+#include "nthw_fpga_reg_defs_sta.h"
#include "nthw_fpga_reg_defs_tx_cpy.h"
#include "nthw_fpga_reg_defs_tx_ins.h"
#include "nthw_fpga_reg_defs_tx_rpl.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
new file mode 100644
index 0000000000..640ffcbc52
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -0,0 +1,40 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_STA_
+#define _NTHW_FPGA_REG_DEFS_STA_
+
+/* STA */
+#define STA_CFG (0xcecaf9f4UL)
+#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
+#define STA_CFG_CNT_FRZ (0x8c27a596UL)
+#define STA_CFG_DMA_ENA (0x940dbacUL)
+#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_HOST_ADR_LSB (0xde569336UL)
+#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
+#define STA_HOST_ADR_MSB (0xdf94f901UL)
+#define STA_HOST_ADR_MSB_MSB (0x114798c8UL)
+#define STA_LOAD_BIN (0x2e842591UL)
+#define STA_LOAD_BIN_BIN (0x1a2b942eUL)
+#define STA_LOAD_BPS_RX_0 (0xbf8f4595UL)
+#define STA_LOAD_BPS_RX_0_BPS (0x41647781UL)
+#define STA_LOAD_BPS_RX_1 (0xc8887503UL)
+#define STA_LOAD_BPS_RX_1_BPS (0x7c045e31UL)
+#define STA_LOAD_BPS_TX_0 (0x9ae41a49UL)
+#define STA_LOAD_BPS_TX_0_BPS (0x870b7e06UL)
+#define STA_LOAD_BPS_TX_1 (0xede32adfUL)
+#define STA_LOAD_BPS_TX_1_BPS (0xba6b57b6UL)
+#define STA_LOAD_PPS_RX_0 (0x811173c3UL)
+#define STA_LOAD_PPS_RX_0_PPS (0xbee573fcUL)
+#define STA_LOAD_PPS_RX_1 (0xf6164355UL)
+#define STA_LOAD_PPS_RX_1_PPS (0x83855a4cUL)
+#define STA_LOAD_PPS_TX_0 (0xa47a2c1fUL)
+#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
+#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
+#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_STATUS (0x91c5c51cUL)
+#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_STA_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 1b25621537..86876ecda6 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -65,6 +65,8 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define MAX_RX_PACKETS 128
#define MAX_TX_PACKETS 128
+uint64_t rte_tsc_freq;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -88,7 +90,7 @@ static const struct rte_pci_id nthw_pci_id_map[] = {
static const struct sg_ops_s *sg_ops;
-static rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
+rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
/*
* Store and get adapter info
@@ -156,6 +158,102 @@ get_pdrv_from_pci(struct rte_pci_addr addr)
return p_drv;
}
+static int dpdk_stats_collect(struct pmd_internals *internals, struct rte_eth_stats *stats)
+{
+ const struct ntnic_filter_ops *ntnic_filter_ops = get_ntnic_filter_ops();
+
+ if (ntnic_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "ntnic_filter_ops uninitialized");
+ return -1;
+ }
+
+ unsigned int i;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const int if_index = internals->n_intf_no;
+ uint64_t rx_total = 0;
+ uint64_t rx_total_b = 0;
+ uint64_t tx_total = 0;
+ uint64_t tx_total_b = 0;
+ uint64_t tx_err_total = 0;
+
+	if (!p_nthw_stat || !p_nt4ga_stat || !stats || if_index < 0 ||
+		if_index >= NUM_ADAPTER_PORTS_MAX) {
+ NT_LOG_DBGX(WRN, NTNIC, "error exit");
+ return -1;
+ }
+
+ /*
+ * Pull the latest port statistic numbers (Rx/Tx pkts and bytes)
+ * Return values are in the "internals->rxq_scg[]" and "internals->txq_scg[]" arrays
+ */
+ ntnic_filter_ops->poll_statistics(internals);
+
+ memset(stats, 0, sizeof(*stats));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_rx_queues; i++) {
+ stats->q_ipackets[i] = internals->rxq_scg[i].rx_pkts;
+ stats->q_ibytes[i] = internals->rxq_scg[i].rx_bytes;
+ rx_total += stats->q_ipackets[i];
+ rx_total_b += stats->q_ibytes[i];
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_tx_queues; i++) {
+ stats->q_opackets[i] = internals->txq_scg[i].tx_pkts;
+ stats->q_obytes[i] = internals->txq_scg[i].tx_bytes;
+ stats->q_errors[i] = internals->txq_scg[i].err_pkts;
+ tx_total += stats->q_opackets[i];
+ tx_total_b += stats->q_obytes[i];
+ tx_err_total += stats->q_errors[i];
+ }
+
+ stats->imissed = internals->rx_missed;
+ stats->ipackets = rx_total;
+ stats->ibytes = rx_total_b;
+ stats->opackets = tx_total;
+ stats->obytes = tx_total_b;
+ stats->oerrors = tx_err_total;
+
+ return 0;
+}
+
+static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s *p_nt_drv,
+ int n_intf_no)
+{
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ unsigned int i;
+
+	if (!p_nthw_stat || !p_nt4ga_stat || n_intf_no < 0 || n_intf_no >= NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /* Rx */
+ for (i = 0; i < internals->nb_rx_queues; i++) {
+ internals->rxq_scg[i].rx_pkts = 0;
+ internals->rxq_scg[i].rx_bytes = 0;
+ internals->rxq_scg[i].err_pkts = 0;
+ }
+
+ internals->rx_missed = 0;
+
+ /* Tx */
+ for (i = 0; i < internals->nb_tx_queues; i++) {
+ internals->txq_scg[i].tx_pkts = 0;
+ internals->txq_scg[i].tx_bytes = 0;
+ internals->txq_scg[i].err_pkts = 0;
+ }
+
+ p_nt4ga_stat->n_totals_reset_timestamp = time(NULL);
+
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
static int
eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
{
@@ -194,6 +292,23 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
return 0;
}
+static int eth_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ dpdk_stats_collect(internals, stats);
+ return 0;
+}
+
+static int eth_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ const int if_index = internals->n_intf_no;
+ dpdk_stats_reset(internals, p_nt_drv, if_index);
+ return 0;
+}
+
static int
eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info)
{
@@ -1455,6 +1570,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_set_link_down = eth_dev_set_link_down,
.dev_close = eth_dev_close,
.link_update = eth_link_update,
+ .stats_get = eth_stats_get,
+ .stats_reset = eth_stats_reset,
.dev_infos_get = eth_dev_infos_get,
.fw_version_get = eth_fw_version_get,
.rx_queue_setup = eth_rx_scg_queue_setup,
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 4962ab8d5a..e2fce02afa 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -8,11 +8,19 @@
#include "create_elements.h"
#include "ntnic_mod_reg.h"
#include "ntos_system.h"
+#include "ntos_drv.h"
#define MAX_RTE_FLOWS 8192
+#define MAX_COLOR_FLOW_STATS 0x400
#define NT_MAX_COLOR_FLOW_STATS 0x400
+#if (MAX_COLOR_FLOW_STATS != NT_MAX_COLOR_FLOW_STATS)
+#error Difference in COLOR_FLOW_STATS. Please synchronize the defines.
+#endif
+
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
@@ -668,6 +676,9 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
/* Cleanup recorded flows */
nt_flows[flow].used = 0;
nt_flows[flow].caller_id = 0;
+ nt_flows[flow].stat_bytes = 0UL;
+ nt_flows[flow].stat_pkts = 0UL;
+ nt_flows[flow].stat_tcp_flags = 0;
}
}
@@ -707,6 +718,127 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int poll_statistics(struct pmd_internals *internals)
+{
+ int flow;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const int if_index = internals->n_intf_no;
+ static uint64_t last_stat_rtc; /* persists across calls: global counters are read at most once a second */
+
+ if (!p_nt4ga_stat || if_index < 0 || if_index > NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ assert(rte_tsc_freq > 0);
+
+ rte_spinlock_lock(&hwlock);
+
+ uint64_t now_rtc = rte_get_tsc_cycles();
+
+ /*
* Check per port at most once a second:
* if more than a second has passed since the last stat read, do a new one
+ */
+ if ((now_rtc - internals->last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ return 0;
+ }
+
+ internals->last_stat_rtc = now_rtc;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /*
+ * Add the RX statistics increments since last time we polled.
+ * (No difference if physical or virtual port)
+ */
+ internals->rxq_scg[0].rx_pkts += p_nt4ga_stat->a_port_rx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_packets_base[if_index];
+ internals->rxq_scg[0].rx_bytes += p_nt4ga_stat->a_port_rx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_octets_base[if_index];
+ internals->rxq_scg[0].err_pkts += 0;
+ internals->rx_missed += p_nt4ga_stat->a_port_rx_drops_total[if_index] -
+ p_nt4ga_stat->a_port_rx_drops_base[if_index];
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_rx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_packets_total[if_index];
+ p_nt4ga_stat->a_port_rx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_octets_total[if_index];
+ p_nt4ga_stat->a_port_rx_drops_base[if_index] =
+ p_nt4ga_stat->a_port_rx_drops_total[if_index];
+
+ /* Tx (here we must distinguish between physical and virtual ports) */
+ if (internals->type == PORT_TYPE_PHYSICAL) {
+ /* Add the statistics increments since last time we polled */
+ internals->txq_scg[0].tx_pkts += p_nt4ga_stat->a_port_tx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_packets_base[if_index];
+ internals->txq_scg[0].tx_bytes += p_nt4ga_stat->a_port_tx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_octets_base[if_index];
+ internals->txq_scg[0].err_pkts += 0;
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_tx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_packets_total[if_index];
+ p_nt4ga_stat->a_port_tx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_octets_total[if_index];
+ }
+
+ /* Globally only once a second */
+ if ((now_rtc - last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return 0;
+ }
+
+ last_stat_rtc = now_rtc;
+
+ /* All color counters are global, therefore only one PMD must update them */
+ const struct color_counters *p_color_counters = p_nt4ga_stat->mp_stat_structs_color;
+ struct color_counters *p_color_counters_base = p_nt4ga_stat->a_stat_structs_color_base;
+ uint64_t color_packets_accumulated, color_bytes_accumulated;
+
+ for (flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used) {
+ unsigned int color = nt_flows[flow].flow_stat_id;
+
+ if (color < NT_MAX_COLOR_FLOW_STATS) {
+ color_packets_accumulated = p_color_counters[color].color_packets;
+ nt_flows[flow].stat_pkts +=
+ (color_packets_accumulated -
+ p_color_counters_base[color].color_packets);
+
+ nt_flows[flow].stat_tcp_flags |= p_color_counters[color].tcp_flags;
+
+ color_bytes_accumulated = p_color_counters[color].color_bytes;
+ nt_flows[flow].stat_bytes +=
+ (color_bytes_accumulated -
+ p_color_counters_base[color].color_bytes);
+
+ /* Update the counter bases */
+ p_color_counters_base[color].color_packets =
+ color_packets_accumulated;
+ p_color_counters_base[color].color_bytes = color_bytes_accumulated;
+ }
+ }
+ }
+
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
+static const struct ntnic_filter_ops ntnic_filter_ops = {
+ .poll_statistics = poll_statistics,
+};
+
+void ntnic_filter_init(void)
+{
+ register_ntnic_filter_ops(&ntnic_filter_ops);
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 593b56bf5b..355e2032b1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,21 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+static const struct ntnic_filter_ops *ntnic_filter_ops;
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
+{
+ ntnic_filter_ops = ops;
+}
+
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void)
+{
+ if (ntnic_filter_ops == NULL)
+ ntnic_filter_init();
+
+ return ntnic_filter_ops;
+}
+
static struct link_ops_s *link_100g_ops;
void register_100g_link_ops(struct link_ops_s *ops)
@@ -47,6 +62,21 @@ const struct port_ops *get_port_ops(void)
return port_ops;
}
+static const struct nt4ga_stat_ops *nt4ga_stat_ops;
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops)
+{
+ nt4ga_stat_ops = ops;
+}
+
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void)
+{
+ if (nt4ga_stat_ops == NULL)
+ nt4ga_stat_ops_init();
+
+ return nt4ga_stat_ops;
+}
+
static const struct adapter_ops *adapter_ops;
void register_adapter_ops(const struct adapter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index e40ed9b949..30b9afb7d3 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -111,6 +111,14 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+struct ntnic_filter_ops {
+ int (*poll_statistics)(struct pmd_internals *internals);
+};
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops);
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void);
+void ntnic_filter_init(void);
+
struct link_ops_s {
int (*link_init)(struct adapter_info_s *p_adapter_info, nthw_fpga_t *p_fpga);
};
@@ -175,6 +183,15 @@ void register_port_ops(const struct port_ops *ops);
const struct port_ops *get_port_ops(void);
void port_init(void);
+struct nt4ga_stat_ops {
+ int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+};
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void);
+void nt4ga_stat_ops_init(void);
+
struct adapter_ops {
int (*init)(struct adapter_info_s *p_adapter_info);
int (*deinit)(struct adapter_info_s *p_adapter_info);
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index a482fb43ad..f2eccf3501 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -22,6 +22,7 @@
* The windows size must max be 3 min in order to
* prevent overflow.
*/
+#define PORT_LOAD_WINDOWS_SIZE 2ULL
#define FLM_LOAD_WINDOWS_SIZE 2ULL
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
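The RX half of dpdk_stats_collect above follows a common pattern: copy each queue's counters into the rte_eth_stats per-queue arrays while summing them into the port totals. A minimal standalone sketch of that pattern (the struct and function names here are illustrative, not the driver's actual types):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NUM_QUEUES 4

struct queue_cnt {
	uint64_t pkts;
	uint64_t bytes;
};

struct port_stats {
	uint64_t q_ipackets[NUM_QUEUES];
	uint64_t q_ibytes[NUM_QUEUES];
	uint64_t ipackets;
	uint64_t ibytes;
};

/* Copy each queue's counters into the per-queue arrays and sum them
 * into the port totals, as the RX loop in dpdk_stats_collect does. */
static void collect(struct port_stats *s, const struct queue_cnt *q, int nq)
{
	memset(s, 0, sizeof(*s));

	for (int i = 0; i < nq && i < NUM_QUEUES; i++) {
		s->q_ipackets[i] = q[i].pkts;
		s->q_ibytes[i] = q[i].bytes;
		s->ipackets += s->q_ipackets[i];
		s->ibytes += s->q_ibytes[i];
	}
}
```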
* [PATCH v2 55/73] net/ntnic: add rpf module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (53 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 54/73] net/ntnic: add statistics API Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 56/73] net/ntnic: add statistics poll Serhii Iliushyk
` (18 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Receive Port FIFO (RPF) module controls the small FPGA FIFO in which
packets are stored before they enter the packet processor pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 25 +++-
drivers/net/ntnic/include/ntnic_stat.h | 2 +
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +++++++
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 ++++++++++++++++++
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 ++
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +++
10 files changed, 228 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 0e20f3ea45..f733fd5459 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -11,6 +11,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nim.h"
#include "flow_filter.h"
+#include "ntnic_stat.h"
#include "ntnic_mod_reg.h"
#define DEFAULT_MAX_BPS_SPEED 100e9
@@ -43,7 +44,7 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
if (!p_nthw_rmc) {
nthw_stat_delete(p_nthw_stat);
- NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ NT_LOG(ERR, NTNIC, "%s: ERROR rmc allocation", p_adapter_id_str);
return -1;
}
@@ -54,6 +55,22 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
p_nt4ga_stat->mp_nthw_rmc = NULL;
}
+ if (nthw_rpf_init(NULL, p_fpga, p_adapter_info->adapter_no) == 0) {
+ nthw_rpf_t *p_nthw_rpf = nthw_rpf_new();
+
+ if (!p_nthw_rpf) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rpf_init(p_nthw_rpf, p_fpga, p_adapter_info->adapter_no);
+ p_nt4ga_stat->mp_nthw_rpf = p_nthw_rpf;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rpf = NULL;
+ }
+
p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
nthw_stat_init(p_nthw_stat, p_fpga, 0);
@@ -77,6 +94,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_block(p_nt4ga_stat->mp_nthw_rpf);
+
/* Allocate and map memory for fpga statistics */
{
uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
@@ -112,6 +132,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_unblock(p_nt4ga_stat->mp_nthw_rpf);
+
p_nt4ga_stat->mp_stat_structs_color =
calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 2aee3f8425..ed24a892ec 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -8,6 +8,7 @@
#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_rpf.h"
#include "nthw_fpga_model.h"
#define NT_MAX_COLOR_FLOW_STATS 0x400
@@ -102,6 +103,7 @@ struct flm_counters_v1 {
struct nt4ga_stat_s {
nthw_stat_t *mp_nthw_stat;
nthw_rmc_t *mp_nthw_rmc;
+ nthw_rpf_t *mp_nthw_rpf;
struct nt_dma_s *p_stat_dma;
uint32_t *p_stat_dma_virtual;
uint32_t n_stat_size;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 216341bb11..ed5a201fd5 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_iic.c',
'nthw/core/nthw_mac_pcs.c',
'nthw/core/nthw_pcie3.c',
+ 'nthw/core/nthw_rpf.c',
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
new file mode 100644
index 0000000000..4c6c57ba55
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
@@ -0,0 +1,48 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTHW_RPF_HPP_
+#define NTHW_RPF_HPP_
+
+#include "nthw_fpga_model.h"
+#include <pthread.h>
+
+struct nthw_rpf {
+ nthw_fpga_t *mp_fpga;
+
+ nthw_module_t *m_mod_rpf;
+
+ int mn_instance;
+
+ nthw_register_t *mp_reg_control;
+ nthw_field_t *mp_fld_control_pen;
+ nthw_field_t *mp_fld_control_rpp_en;
+ nthw_field_t *mp_fld_control_st_tgl_en;
+ nthw_field_t *mp_fld_control_keep_alive_en;
+
+ nthw_register_t *mp_ts_sort_prg;
+ nthw_field_t *mp_fld_ts_sort_prg_maturing_delay;
+ nthw_field_t *mp_fld_ts_sort_prg_ts_at_eof;
+
+ int m_default_maturing_delay;
+ bool m_administrative_block; /* used to enforce license expiry */
+
+ pthread_mutex_t rpf_mutex;
+};
+
+typedef struct nthw_rpf nthw_rpf_t;
+typedef struct nthw_rpf nt_rpf;
+
+nthw_rpf_t *nthw_rpf_new(void);
+void nthw_rpf_delete(nthw_rpf_t *p);
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rpf_administrative_block(nthw_rpf_t *p);
+void nthw_rpf_block(nthw_rpf_t *p);
+void nthw_rpf_unblock(nthw_rpf_t *p);
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay);
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p);
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable);
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p);
+
+#endif
diff --git a/drivers/net/ntnic/nthw/core/nthw_rpf.c b/drivers/net/ntnic/nthw/core/nthw_rpf.c
new file mode 100644
index 0000000000..81c704d01a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_rpf.c
@@ -0,0 +1,119 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+#include "nthw_rpf.h"
+
+nthw_rpf_t *nthw_rpf_new(void)
+{
+ nthw_rpf_t *p = malloc(sizeof(nthw_rpf_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_rpf_t));
+
+ return p;
+}
+
+void nthw_rpf_delete(nthw_rpf_t *p)
+{
+ if (p) {
+ memset(p, 0, sizeof(nthw_rpf_t));
+ free(p);
+ }
+}
+
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *p_mod = nthw_fpga_query_module(p_fpga, MOD_RPF, n_instance);
+
+ if (p == NULL)
+ return p_mod == NULL ? -1 : 0;
+
+ if (p_mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: MOD_RPF %d: no such instance",
+ p->mp_fpga->p_fpga_info->mp_adapter_id_str, p->mn_instance);
+ return -1;
+ }
+
+ p->m_mod_rpf = p_mod;
+
+ p->mp_fpga = p_fpga;
+
+ p->m_administrative_block = false;
+
+ /* CONTROL */
+ p->mp_reg_control = nthw_module_get_register(p->m_mod_rpf, RPF_CONTROL);
+ p->mp_fld_control_pen = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_PEN);
+ p->mp_fld_control_rpp_en = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_RPP_EN);
+ p->mp_fld_control_st_tgl_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_ST_TGL_EN);
+ p->mp_fld_control_keep_alive_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_KEEP_ALIVE_EN);
+
+ /* TS_SORT_PRG */
+ p->mp_ts_sort_prg = nthw_module_get_register(p->m_mod_rpf, RPF_TS_SORT_PRG);
+ p->mp_fld_ts_sort_prg_maturing_delay =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_MATURING_DELAY);
+ p->mp_fld_ts_sort_prg_ts_at_eof =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_TS_AT_EOF);
+ p->m_default_maturing_delay =
+ nthw_fpga_get_product_param(p_fpga, NT_RPF_MATURING_DEL_DEFAULT, 0);
+
+ /* Initialize mutex */
+ pthread_mutex_init(&p->rpf_mutex, NULL);
+ return 0;
+}
+
+void nthw_rpf_administrative_block(nthw_rpf_t *p)
+{
+ /* block all MAC ports */
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+
+ p->m_administrative_block = true;
+}
+
+void nthw_rpf_block(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+}
+
+void nthw_rpf_unblock(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+
+ nthw_field_set_val32(p->mp_fld_control_pen, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_rpp_en, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_st_tgl_en, 1);
+ nthw_field_set_val_flush32(p->mp_fld_control_keep_alive_en, 1);
+}
+
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_maturing_delay, (uint32_t)delay);
+}
+
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ /* Maturing delay is a two's complement 18 bit value, so we retrieve it as signed */
+ return nthw_field_get_signed(p->mp_fld_ts_sort_prg_maturing_delay);
+}
+
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_ts_at_eof, enable);
+}
+
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p)
+{
+ return nthw_field_get_updated(p->mp_fld_ts_sort_prg_ts_at_eof);
+}
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
index 4d495f5b96..9eaaeb550d 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
@@ -1050,6 +1050,18 @@ uint32_t nthw_field_get_val32(const nthw_field_t *p)
return val;
}
+int32_t nthw_field_get_signed(const nthw_field_t *p)
+{
+ uint32_t val;
+
+ nthw_field_get_val(p, &val, 1);
+
+ if (val & (1U << nthw_field_get_bit_pos_high(p))) /* check sign */
+ val = val | ~nthw_field_get_mask(p); /* sign extension */
+
+ return (int32_t)val; /* cast to signed value */
+}
+
uint32_t nthw_field_get_updated(const nthw_field_t *p)
{
uint32_t val;
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
index 7956f0689e..d4e7ab3edd 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
@@ -227,6 +227,7 @@ void nthw_field_get_val(const nthw_field_t *p, uint32_t *p_data, uint32_t len);
void nthw_field_set_val(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
void nthw_field_set_val_flush(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
uint32_t nthw_field_get_val32(const nthw_field_t *p);
+int32_t nthw_field_get_signed(const nthw_field_t *p);
uint32_t nthw_field_get_updated(const nthw_field_t *p);
void nthw_field_update_register(const nthw_field_t *p);
void nthw_field_flush_register(const nthw_field_t *p);
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index ddc144dc02..03122acaf5 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,6 +41,7 @@
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
#define MOD_RPL (0x6de535c3UL)
+#define MOD_RPF (0x8d30dcddUL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 8f196f885f..7067f4b1d0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -39,6 +39,7 @@
#include "nthw_fpga_reg_defs_qsl.h"
#include "nthw_fpga_reg_defs_rac.h"
#include "nthw_fpga_reg_defs_rmc.h"
+#include "nthw_fpga_reg_defs_rpf.h"
#include "nthw_fpga_reg_defs_rpl.h"
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
new file mode 100644
index 0000000000..72f450b85d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_RPF_
+#define _NTHW_FPGA_REG_DEFS_RPF_
+
+/* RPF */
+#define RPF_CONTROL (0x7a5bdb50UL)
+#define RPF_CONTROL_KEEP_ALIVE_EN (0x80be3ffcUL)
+#define RPF_CONTROL_PEN (0xb23137b8UL)
+#define RPF_CONTROL_RPP_EN (0xdb51f109UL)
+#define RPF_CONTROL_ST_TGL_EN (0x45a6ecfaUL)
+#define RPF_TS_SORT_PRG (0xff1d137eUL)
+#define RPF_TS_SORT_PRG_MATURING_DELAY (0x2a38e127UL)
+#define RPF_TS_SORT_PRG_TS_AT_EOF (0x9f27d433UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_RPF_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
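The nthw_field_get_signed helper added in this patch sign-extends a field value whose width is smaller than 32 bits; for the 18-bit two's-complement maturing-delay field the operation reduces to the sketch below (the helper name and fixed width are illustrative — the driver derives the mask and sign bit from the field model at runtime):

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend an 18-bit two's-complement field value to int32_t, the
 * same operation nthw_field_get_signed performs on the maturing-delay
 * field (width assumed fixed at 18 bits for this sketch). */
static int32_t sign_extend18(uint32_t val)
{
	const uint32_t mask = (1U << 18) - 1;	/* field mask, bits 17:0 */

	if (val & (1U << 17))			/* sign bit set? */
		val |= ~mask;			/* extend with ones */

	return (int32_t)val;
}
```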
* [PATCH v2 56/73] net/ntnic: add statistics poll
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (54 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 55/73] net/ntnic: add rpf module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 57/73] net/ntnic: added flm stat interface Serhii Iliushyk
` (17 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add a mechanism that polls the statistics module and updates the values
via the DMA module.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
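The timestamp handled by the new poll path is in pcap-style 32:32 format: whole seconds in the upper 32 bits and a nanosecond remainder in the lower 32. A standalone sketch equivalent to the timestamp2ns() helper added in this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Convert a 32:32 seconds:nanoseconds timestamp to plain nanoseconds,
 * mirroring the timestamp2ns() helper added in this patch. */
static inline uint64_t ts_to_ns(uint64_t ts)
{
	return (ts >> 32) * 1000000000ULL + (ts & 0xffffffffULL);
}
```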
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 343 ++++++++++++++++++
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 78 ++++
.../net/ntnic/nthw/core/include/nthw_rmc.h | 5 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 20 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 1 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 128 +++++++
drivers/net/ntnic/ntnic_ethdev.c | 143 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 +
9 files changed, 721 insertions(+)
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index f733fd5459..3afc5b7853 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -16,6 +16,27 @@
#define DEFAULT_MAX_BPS_SPEED 100e9
+/* Inline timestamp format is pcap 32:32 bits (seconds:nanoseconds). Convert to nsecs */
+static inline uint64_t timestamp2ns(uint64_t ts)
+{
+ return ((ts) >> 32) * 1000000000 + ((ts) & 0xffffffff);
+}
+
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual);
+
+static int nt4ga_stat_collect(struct adapter_info_s *p_adapter_info, nt4ga_stat_t *p_nt4ga_stat)
+{
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ p_nt4ga_stat->last_timestamp = timestamp2ns(*p_nthw_stat->mp_timestamp);
+ nt4ga_stat_collect_cap_v1_stats(p_adapter_info, p_nt4ga_stat,
+ p_nt4ga_stat->p_stat_dma_virtual);
+
+ return 0;
+}
+
static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
{
const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
@@ -203,9 +224,331 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return 0;
}
+/* Called with stat mutex locked */
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual)
+{
+ (void)p_adapter_info;
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL)
+ return -1;
+
+ if (!p_nt4ga_stat)
+ return -1;
+
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
+ const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
+ int c, h, p;
+
+ if (!p_nthw_stat)
+ return -1;
+
+ if (p_nthw_stat->mn_stat_layout_version < 6) {
+ NT_LOG(ERR, NTNIC, "HW STA module version not supported");
+ return -1;
+ }
+
+ /* RX ports */
+ for (c = 0; c < p_nthw_stat->m_nb_color_counters / 2; c++) {
+ p_nt4ga_stat->mp_stat_structs_color[c].color_packets += p_stat_dma_virtual[c * 2];
+ p_nt4ga_stat->mp_stat_structs_color[c].color_bytes +=
+ p_stat_dma_virtual[c * 2 + 1];
+ }
+
+ /* Move to Host buffer counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_color_counters;
+
+ for (h = 0; h < p_nthw_stat->m_nb_rx_host_buffers; h++) {
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_packets += p_stat_dma_virtual[h * 8];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_packets += p_stat_dma_virtual[h * 8 + 1];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_packets += p_stat_dma_virtual[h * 8 + 2];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_packets +=
+ p_stat_dma_virtual[h * 8 + 3];
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_bytes += p_stat_dma_virtual[h * 8 + 4];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_bytes += p_stat_dma_virtual[h * 8 + 5];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_bytes += p_stat_dma_virtual[h * 8 + 6];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_bytes +=
+ p_stat_dma_virtual[h * 8 + 7];
+ }
+
+ /* Move to Rx Port counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_rx_hb_counters;
+
+ /* RX ports */
+ for (p = 0; p < n_rx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 23];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].duplicate +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 24];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_ip_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 25];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_udp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 26];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_tcp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 27];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_giant_undersize +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 28];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_baby_giant +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 29];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_not_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 30];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 31];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 32];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 33];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 34];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 35];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 36];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 37];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 43];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 44];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 45];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 46];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 47]
+ : 0;
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 48];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 49];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 50];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 51];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 52];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 53];
+
+ /* Rx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41] +
+ (p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0);
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_rx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+ p_nt4ga_stat->a_port_rx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_rx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Move to Tx Port counters */
+ p_stat_dma_virtual += n_rx_ports * p_nthw_stat->m_nb_rx_port_counters;
+
+ for (p = 0; p < n_tx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 23];
+
+ /* Tx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_tx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+ p_nt4ga_stat->a_port_tx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_tx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Update and get port load counters */
+ for (p = 0; p < n_rx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ for (p = 0; p < n_tx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ return 0;
+}
+
static struct nt4ga_stat_ops ops = {
.nt4ga_stat_init = nt4ga_stat_init,
.nt4ga_stat_setup = nt4ga_stat_setup,
+ .nt4ga_stat_collect = nt4ga_stat_collect
};
void nt4ga_stat_ops_init(void)
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 1135e9a539..38e4d0ca35 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -16,6 +16,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
+ rte_thread_t stat_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index ed24a892ec..0735dbc085 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -85,16 +85,87 @@ struct color_counters {
};
struct host_buffer_counters {
+ uint64_t flush_packets;
+ uint64_t drop_packets;
+ uint64_t fwd_packets;
+ uint64_t dbs_drop_packets;
+ uint64_t flush_bytes;
+ uint64_t drop_bytes;
+ uint64_t fwd_bytes;
+ uint64_t dbs_drop_bytes;
};
struct port_load_counters {
+ uint64_t rx_pps;
uint64_t rx_pps_max;
+ uint64_t tx_pps;
uint64_t tx_pps_max;
+ uint64_t rx_bps;
uint64_t rx_bps_max;
+ uint64_t tx_bps;
uint64_t tx_bps_max;
};
struct port_counters_v2 {
+ /* Rx/Tx common port counters */
+ uint64_t drop_events;
+ uint64_t pkts;
+ /* FPGA counters */
+ uint64_t octets;
+ uint64_t broadcast_pkts;
+ uint64_t multicast_pkts;
+ uint64_t unicast_pkts;
+ uint64_t pkts_alignment;
+ uint64_t pkts_code_violation;
+ uint64_t pkts_crc;
+ uint64_t undersize_pkts;
+ uint64_t oversize_pkts;
+ uint64_t fragments;
+ uint64_t jabbers_not_truncated;
+ uint64_t jabbers_truncated;
+ uint64_t pkts_64_octets;
+ uint64_t pkts_65_to_127_octets;
+ uint64_t pkts_128_to_255_octets;
+ uint64_t pkts_256_to_511_octets;
+ uint64_t pkts_512_to_1023_octets;
+ uint64_t pkts_1024_to_1518_octets;
+ uint64_t pkts_1519_to_2047_octets;
+ uint64_t pkts_2048_to_4095_octets;
+ uint64_t pkts_4096_to_8191_octets;
+ uint64_t pkts_8192_to_max_octets;
+ uint64_t mac_drop_events;
+ uint64_t pkts_lr;
+ /* Rx only port counters */
+ uint64_t duplicate;
+ uint64_t pkts_ip_chksum_error;
+ uint64_t pkts_udp_chksum_error;
+ uint64_t pkts_tcp_chksum_error;
+ uint64_t pkts_giant_undersize;
+ uint64_t pkts_baby_giant;
+ uint64_t pkts_not_isl_vlan_mpls;
+ uint64_t pkts_isl;
+ uint64_t pkts_vlan;
+ uint64_t pkts_isl_vlan;
+ uint64_t pkts_mpls;
+ uint64_t pkts_isl_mpls;
+ uint64_t pkts_vlan_mpls;
+ uint64_t pkts_isl_vlan_mpls;
+ uint64_t pkts_no_filter;
+ uint64_t pkts_dedup_drop;
+ uint64_t pkts_filter_drop;
+ uint64_t pkts_overflow;
+ uint64_t pkts_dbs_drop;
+ uint64_t octets_no_filter;
+ uint64_t octets_dedup_drop;
+ uint64_t octets_filter_drop;
+ uint64_t octets_overflow;
+ uint64_t octets_dbs_drop;
+ uint64_t ipft_first_hit;
+ uint64_t ipft_first_not_hit;
+ uint64_t ipft_mid_hit;
+ uint64_t ipft_mid_not_hit;
+ uint64_t ipft_last_hit;
+ uint64_t ipft_last_not_hit;
};
struct flm_counters_v1 {
@@ -147,6 +218,8 @@ struct nt4ga_stat_s {
uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_drops_total[NUM_ADAPTER_PORTS_MAX];
};
typedef struct nt4ga_stat_s nt4ga_stat_t;
@@ -159,4 +232,9 @@ int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
uint32_t *p_stat_dma_virtual);
int nthw_stat_trigger(nthw_stat_t *p);
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index b239752674..9c40804cd9 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -47,4 +47,9 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p);
+
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 748519aeb4..570a179fc8 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,26 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_sf_ram_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_descr_fifo_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p)
+{
+ return (p->mp_reg_dbg) ? nthw_field_get_updated(p->mp_fld_dbg_merge) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p)
+{
+ return (p->mp_reg_mac_if) ? nthw_field_get_updated(p->mp_fld_mac_if_err) : 0xffffffff;
+}
+
void nthw_rmc_block(nthw_rmc_t *p)
{
/* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d61044402d..aac3144cc0 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
+#include "ntlog.h"
#include "ntnic_mod_reg.h"
#include "flow_api.h"
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
index 6adcd2e090..078eec5e1f 100644
--- a/drivers/net/ntnic/nthw/stat/nthw_stat.c
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -368,3 +368,131 @@ int nthw_stat_trigger(nthw_stat_t *p)
return 0;
}
+
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 86876ecda6..f94340f489 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -4,6 +4,9 @@
*/
#include <stdint.h>
+#include <stdarg.h>
+
+#include <signal.h>
#include <rte_eal.h>
#include <rte_dev.h>
@@ -25,6 +28,7 @@
#include "nt_util.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
#define THREAD_JOIN(a) rte_thread_join(a, NULL)
#define THREAD_FUNC static uint32_t
@@ -67,6 +71,9 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
uint64_t rte_tsc_freq;
+static void (*previous_handler)(int sig);
+static rte_thread_t shutdown_tid;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -1407,6 +1414,7 @@ drv_deinit(struct drv_s *p_drv)
/* stop statistics threads */
p_drv->ntdrv.b_shutdown = true;
+ THREAD_JOIN(p_nt_drv->stat_thread);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
@@ -1628,6 +1636,87 @@ THREAD_FUNC adapter_flm_update_thread_fn(void *context)
return THREAD_RETURN;
}
+/*
+ * Adapter stat thread
+ */
+THREAD_FUNC adapter_stat_thread_fn(void *context)
+{
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
+
+ if (nt4ga_stat_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "Statistics module uninitialized");
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const char *const p_adapter_id_str = p_nt_drv->adapter_info.mp_adapter_id_str;
+ (void)p_adapter_id_str;
+
+ if (!p_nthw_stat)
+ return THREAD_RETURN;
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: begin", p_adapter_id_str);
+
+ assert(p_nthw_stat);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ nt_os_wait_usec(10 * 1000);
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ uint32_t loop = 0;
+
+ while ((!p_drv->ntdrv.b_shutdown) &&
+ (*p_nthw_stat->mp_timestamp == (uint64_t)-1)) {
+ nt_os_wait_usec(1 * 100);
+
+ if (rte_log_get_level(nt_log_ntnic) == RTE_LOG_DEBUG &&
+ (++loop & 0x3fff) == 0) {
+ if (p_nt4ga_stat->mp_nthw_rpf) {
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+
+ } else if (p_nt4ga_stat->mp_nthw_rmc) {
+ uint32_t sf_ram_of =
+ nthw_rmc_get_status_sf_ram_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+ uint32_t descr_fifo_of =
+ nthw_rmc_get_status_descr_fifo_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+
+ uint32_t dbg_merge =
+ nthw_rmc_get_dbg_merge(p_nt4ga_stat->mp_nthw_rmc);
+ uint32_t mac_if_err =
+ nthw_rmc_get_mac_if_err(p_nt4ga_stat->mp_nthw_rmc);
+
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+ NT_LOG(ERR, NTNIC, "SF RAM Overflow : %08x",
+ sf_ram_of);
+ NT_LOG(ERR, NTNIC, "Descr Fifo Overflow : %08x",
+ descr_fifo_of);
+ NT_LOG(ERR, NTNIC, "DBG Merge : %08x",
+ dbg_merge);
+ NT_LOG(ERR, NTNIC, "MAC If Errors : %08x",
+ mac_if_err);
+ }
+ }
+ }
+
+ /* Check then collect */
+ {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nt4ga_stat_ops->nt4ga_stat_collect(&p_nt_drv->adapter_info, p_nt4ga_stat);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ }
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: end", p_adapter_id_str);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1885,6 +1974,16 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
}
}
+ pthread_mutex_init(&p_nt_drv->stat_lck, NULL);
+ res = THREAD_CTRL_CREATE(&p_nt_drv->stat_thread, "nt4ga_stat_thr", adapter_stat_thread_fn,
+ (void *)p_drv);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
@@ -2075,6 +2174,48 @@ nthw_pci_dev_deinit(struct rte_eth_dev *eth_dev __rte_unused)
return 0;
}
+static void signal_handler_func_int(int sig)
+{
+ if (sig != SIGINT) {
+ signal(sig, previous_handler);
+ raise(sig);
+ return;
+ }
+
+ kill_pmd = 1;
+}
+
+THREAD_FUNC shutdown_thread(void *arg __rte_unused)
+{
+ while (!kill_pmd)
+ nt_os_wait_usec(100 * 1000);
+
+ NT_LOG_DBGX(DBG, NTNIC, "Shutting down because of ctrl+C");
+
+ signal(SIGINT, previous_handler);
+ raise(SIGINT);
+
+ return THREAD_RETURN;
+}
+
+static int init_shutdown(void)
+{
+ NT_LOG(DBG, NTNIC, "Starting shutdown handler");
+ kill_pmd = 0;
+ previous_handler = signal(SIGINT, signal_handler_func_int);
+ THREAD_CREATE(&shutdown_tid, shutdown_thread, NULL);
+
+ /*
+	 * One-time calculation of TSC cycles per second for the 1 sec stat update
+	 * interval to prevent stat poll flooding by OVS virtual port threads - no need to be precise
+ */
+ uint64_t now_rtc = rte_get_tsc_cycles();
+ nt_os_wait_usec(10 * 1000);
+ rte_tsc_freq = 100 * (rte_get_tsc_cycles() - now_rtc);
+
+ return 0;
+}
+
static int
nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
@@ -2117,6 +2258,8 @@ nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
ret = nthw_pci_dev_init(pci_dev);
+ init_shutdown();
+
NT_LOG_DBGX(DBG, NTNIC, "leave: ret=%d", ret);
return ret;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 30b9afb7d3..8b825d8c48 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -186,6 +186,8 @@ void port_init(void);
struct nt4ga_stat_ops {
int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_collect)(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat);
};
void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
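The port load counters in nt4ga_stat_collect() above are not raw bps/pps values: the hardware register value is scaled by 32 * 64 * 8 (for bits) or 32 (for packets) and divided by PORT_LOAD_WINDOWS_SIZE, with a 128-bit intermediate to avoid overflow. A minimal sketch of that conversion, assuming a hypothetical window size of 2 (the real PORT_LOAD_WINDOWS_SIZE comes from the driver headers and may differ), and assuming GCC/Clang's __uint128_t extension as the patch itself does:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in; the real value is defined in the driver headers. */
#define PORT_LOAD_WINDOWS_SIZE 2ULL

/* Mirrors the scaling in nt4ga_stat_collect(): multiply the raw load
 * register by 32 * 64 * 8 and divide by the window size to get bits per
 * second. The 128-bit intermediate prevents overflow for large readings. */
static uint64_t load_to_bps(uint32_t raw)
{
	return (uint64_t)(((__uint128_t)raw * 32ULL * 64ULL * 8ULL) /
			  PORT_LOAD_WINDOWS_SIZE);
}

/* Same pattern for packets per second: only the sample factor of 32. */
static uint64_t load_to_pps(uint32_t raw)
{
	return (uint64_t)(((__uint128_t)raw * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
}
```

With these factors, a raw reading of 1000 yields 1000 * 16384 / 2 = 8192000 bps and 1000 * 32 / 2 = 16000 pps; the meaning of the individual factors (samples per window, 64-byte units) is an assumption, not stated in the patch.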
* [PATCH v2 57/73] net/ntnic: added flm stat interface
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (55 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 56/73] net/ntnic: add statistics poll Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 58/73] net/ntnic: add tsm module Serhii Iliushyk
` (16 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The FLM statistics (flm stat) module interface was added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 2 ++
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 ++
4 files changed, 16 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 4a1525f237..ed96f77bc0 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -233,4 +233,6 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_filter.h b/drivers/net/ntnic/include/flow_filter.h
index d204c0d882..01777f8c9f 100644
--- a/drivers/net/ntnic/include/flow_filter.h
+++ b/drivers/net/ntnic/include/flow_filter.h
@@ -11,5 +11,6 @@
int flow_filter_init(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device, int adapter_no);
int flow_filter_done(struct flow_nic_dev *dev);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
#endif /* __FLOW_FILTER_HPP__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index aac3144cc0..e953fc1a12 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1048,6 +1048,16 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ (void)ndev;
+ (void)data;
+ (void)size;
+
+ NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
+ return -1;
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
@@ -1062,6 +1072,7 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+ .flow_get_flm_stats = flow_get_flm_stats,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8b825d8c48..8703d478b6 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -336,6 +336,8 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
/*
* Other
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
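The patch wires flow_get_flm_stats() into the static flow_filter_ops table, following the register/get ops pattern used throughout ntnic_mod_reg.h. A miniature sketch of that pattern with illustrative names (demo_* identifiers are hypothetical, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A module fills a static ops table with function pointers and registers
 * it; consumers fetch the table at run time instead of linking directly. */
struct demo_stat_ops {
	int (*get_flm_stats)(uint64_t *data, uint64_t size);
};

static const struct demo_stat_ops *registered_ops;

static void register_demo_stat_ops(const struct demo_stat_ops *ops)
{
	registered_ops = ops;
}

static const struct demo_stat_ops *get_demo_stat_ops(void)
{
	return registered_ops;
}

/* Stub mirroring flow_get_flm_stats() in this patch: the parameters are
 * accepted but unused, and -1 signals "not implemented yet". */
static int demo_get_flm_stats(uint64_t *data, uint64_t size)
{
	(void)data;
	(void)size;
	return -1;
}

static const struct demo_stat_ops demo_ops = {
	.get_flm_stats = demo_get_flm_stats,
};
```

The indirection lets later patches in the series replace the stub with a real FLM implementation without touching any caller.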
* [PATCH v2 58/73] net/ntnic: add tsm module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (56 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 57/73] net/ntnic: added flm stat interface Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 59/73] net/ntnic: add STA module Serhii Iliushyk
` (15 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TSM module, which operates the timers in the physical NIC, was added.
The necessary defines and implementation were added as well.
The Time Stamp Module controls every aspect of packet timestamping,
including time synchronization, time stamp format, and the PTP protocol.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 ++++++
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +++++
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 28 +++
7 files changed, 301 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
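The nthw_tsm.h header added below exposes the 64-bit timestamp and wall-clock time as paired 32-bit register fields (TS_LO/TS_HI, TIME_LO/TIME_HI). A plausible read pattern combines the two halves; this sketch substitutes plain values for the nthw_field_get_updated() register reads, so it illustrates only the hi/lo combination, not the real hardware access:

```c
#include <assert.h>
#include <stdint.h>

/* Assemble a 64-bit TSM value from its LO and HI 32-bit register halves. */
static uint64_t tsm_combine_ts(uint32_t ts_lo, uint32_t ts_hi)
{
	return ((uint64_t)ts_hi << 32) | ts_lo;
}
```

One design caveat with split counters like this: the two halves are read in separate register accesses, so hardware usually latches both on a sample trigger (here nthw_hif_trigger_sample_time()/nthw_pcie3_trigger_sample_time(), as in the nthw_fpga.c validation loop) to avoid a LO rollover between the two reads.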
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index ed5a201fd5..a6c4fec0be 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -41,6 +41,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
'nthw/core/nthw_gmf.c',
+ 'nthw/core/nthw_tsm.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_tsm.h b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
new file mode 100644
index 0000000000..0a3bcdcaf5
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
@@ -0,0 +1,56 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_TSM_H__
+#define __NTHW_TSM_H__
+
+#include "stdint.h"
+
+#include "nthw_fpga_model.h"
+
+struct nthw_tsm {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_tsm;
+ int mn_instance;
+
+ nthw_field_t *mp_fld_config_ts_format;
+
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t0;
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t1;
+
+ nthw_field_t *mp_fld_timer_timer_t0_max_count;
+
+ nthw_field_t *mp_fld_timer_timer_t1_max_count;
+
+ nthw_register_t *mp_reg_ts_lo;
+ nthw_field_t *mp_fld_ts_lo;
+
+ nthw_register_t *mp_reg_ts_hi;
+ nthw_field_t *mp_fld_ts_hi;
+
+ nthw_register_t *mp_reg_time_lo;
+ nthw_field_t *mp_fld_time_lo;
+
+ nthw_register_t *mp_reg_time_hi;
+ nthw_field_t *mp_fld_time_hi;
+};
+
+typedef struct nthw_tsm nthw_tsm_t;
+typedef struct nthw_tsm nthw_tsm;
+
+nthw_tsm_t *nthw_tsm_new(void);
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts);
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time);
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val);
+
+#endif /* __NTHW_TSM_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_fpga.c b/drivers/net/ntnic/nthw/core/nthw_fpga.c
index 9448c29de1..ca69a9d5b1 100644
--- a/drivers/net/ntnic/nthw/core/nthw_fpga.c
+++ b/drivers/net/ntnic/nthw/core/nthw_fpga.c
@@ -13,6 +13,8 @@
#include "nthw_fpga_instances.h"
#include "nthw_fpga_mod_str_map.h"
+#include "nthw_tsm.h"
+
#include <arpa/inet.h>
int nthw_fpga_get_param_info(struct fpga_info_s *p_fpga_info, nthw_fpga_t *p_fpga)
@@ -179,6 +181,7 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
nthw_hif_t *p_nthw_hif = NULL;
nthw_pcie3_t *p_nthw_pcie3 = NULL;
nthw_rac_t *p_nthw_rac = NULL;
+ nthw_tsm_t *p_nthw_tsm = NULL;
mcu_info_t *p_mcu_info = &p_fpga_info->mcu_info;
uint64_t n_fpga_ident = 0;
@@ -331,6 +334,50 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
p_fpga_info->mp_nthw_hif = p_nthw_hif;
+ p_nthw_tsm = nthw_tsm_new();
+
+ if (p_nthw_tsm) {
+ nthw_tsm_init(p_nthw_tsm, p_fpga, 0);
+
+ nthw_tsm_set_config_ts_format(p_nthw_tsm, 1); /* 1 = TSM: TS format native */
+
+ /* Timer T0 - stat toggle timer */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t0_max_count(p_nthw_tsm, 50 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, true);
+
+ /* Timer T1 - keep alive timer */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t1_max_count(p_nthw_tsm, 100 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, true);
+ }
+
+ p_fpga_info->mp_nthw_tsm = p_nthw_tsm;
+
+ /* TSM sample triggering: test validation... */
+#if defined(DEBUG) && (1)
+ {
+ uint64_t n_time, n_ts;
+ int i;
+
+ for (i = 0; i < 4; i++) {
+ if (p_nthw_hif)
+ nthw_hif_trigger_sample_time(p_nthw_hif);
+
+ else if (p_nthw_pcie3)
+ nthw_pcie3_trigger_sample_time(p_nthw_pcie3);
+
+ nthw_tsm_get_time(p_nthw_tsm, &n_time);
+ nthw_tsm_get_ts(p_nthw_tsm, &n_ts);
+
+ NT_LOG(DBG, NTHW, "%s: TSM time: %016" PRIX64 " %016" PRIX64 "\n",
+ p_adapter_id_str, n_time, n_ts);
+
+ nt_os_wait_usec(1000);
+ }
+ }
+#endif
+
return res;
}
diff --git a/drivers/net/ntnic/nthw/core/nthw_tsm.c b/drivers/net/ntnic/nthw/core/nthw_tsm.c
new file mode 100644
index 0000000000..b88dcb9b0b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_tsm.c
@@ -0,0 +1,167 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_tsm.h"
+
+nthw_tsm_t *nthw_tsm_new(void)
+{
+ nthw_tsm_t *p = malloc(sizeof(nthw_tsm_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_tsm_t));
+
+ return p;
+}
+
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_TSM, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: TSM %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_tsm = mod;
+
+ {
+ nthw_register_t *p_reg;
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_CONFIG);
+ p->mp_fld_config_ts_format = nthw_register_get_field(p_reg, TSM_CONFIG_TS_FORMAT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_CTRL);
+ p->mp_fld_timer_ctrl_timer_en_t0 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T0);
+ p->mp_fld_timer_ctrl_timer_en_t1 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T1);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T0);
+ p->mp_fld_timer_timer_t0_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T0_MAX_COUNT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T1);
+ p->mp_fld_timer_timer_t1_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T1_MAX_COUNT);
+
+ p->mp_reg_time_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_LO);
+ p_reg = p->mp_reg_time_lo;
+ p->mp_fld_time_lo = nthw_register_get_field(p_reg, TSM_TIME_LO_NS);
+
+ p->mp_reg_time_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_HI);
+ p_reg = p->mp_reg_time_hi;
+ p->mp_fld_time_hi = nthw_register_get_field(p_reg, TSM_TIME_HI_SEC);
+
+ p->mp_reg_ts_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_LO);
+ p_reg = p->mp_reg_ts_lo;
+ p->mp_fld_ts_lo = nthw_register_get_field(p_reg, TSM_TS_LO_TIME);
+
+ p->mp_reg_ts_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_HI);
+ p_reg = p->mp_reg_ts_hi;
+ p->mp_fld_ts_hi = nthw_register_get_field(p_reg, TSM_TS_HI_TIME);
+ }
+ return 0;
+}
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts)
+{
+ uint32_t n_ts_lo, n_ts_hi;
+ uint64_t val;
+
+ if (!p_ts)
+ return -1;
+
+ n_ts_lo = nthw_field_get_updated(p->mp_fld_ts_lo);
+ n_ts_hi = nthw_field_get_updated(p->mp_fld_ts_hi);
+
+ val = ((((uint64_t)n_ts_hi) << 32UL) | n_ts_lo);
+
+ if (p_ts)
+ *p_ts = val;
+
+ return 0;
+}
+
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time)
+{
+ uint32_t n_time_lo, n_time_hi;
+ uint64_t val;
+
+ if (!p_time)
+ return -1;
+
+ n_time_lo = nthw_field_get_updated(p->mp_fld_time_lo);
+ n_time_hi = nthw_field_get_updated(p->mp_fld_time_hi);
+
+ val = ((((uint64_t)n_time_hi) << 32UL) | n_time_lo);
+
+ if (p_time)
+ *p_time = val;
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T0 - stat toggle timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t0_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t0_max_count,
+ n_timer_val); /* ns (50*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T1 - keep alive timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t1_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t1_max_count,
+ n_timer_val); /* ns (100*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val)
+{
+ nthw_field_update_register(p->mp_fld_config_ts_format);
+ /* 0x1: Native - 10ns units, start date: 1970-01-01. */
+ nthw_field_set_val_flush32(p->mp_fld_config_ts_format, n_val);
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 03122acaf5..e6ed9e714b 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -48,6 +48,7 @@
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_STA (0x76fae64dUL)
+#define MOD_TSM (0x35422a24UL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7067f4b1d0..4d299c6aa8 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -44,6 +44,7 @@
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
#include "nthw_fpga_reg_defs_sdc.h"
+#include "nthw_fpga_reg_defs_tsm.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
#include "nthw_fpga_reg_defs_sta.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
new file mode 100644
index 0000000000..a087850aa4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_TSM_
+#define _NTHW_FPGA_REG_DEFS_TSM_
+
+/* TSM */
+#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_TIMER_CTRL (0x648da051UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
+#define TSM_TIMER_T0 (0x417217a5UL)
+#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
+#define TSM_TIMER_T1 (0x36752733UL)
+#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HI (0x175acea1UL)
+#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
+#define TSM_TIME_LO (0x9a55ae90UL)
+#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TS_HI (0xccfe9e5eUL)
+#define TSM_TS_HI_TIME (0xc23fed30UL)
+#define TSM_TS_LO (0x41f1fe6fUL)
+#define TSM_TS_LO_TIME (0xe0292a3eUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 59/73] net/ntnic: add STA module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (57 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 58/73] net/ntnic: add tsm module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 60/73] net/ntnic: add TSM module Serhii Iliushyk
` (14 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with STA module
support, which enables statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 92 ++++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 8 ++
3 files changed, 100 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index a3d9f94fc6..efdb084cd6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2486,6 +2486,95 @@ static nthw_fpga_register_init_s slc_registers[] = {
{ SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
};
+static nthw_fpga_field_init_s sta_byte_fields[] = {
+ { STA_BYTE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_cfg_fields[] = {
+ { STA_CFG_CNT_CLEAR, 1, 1, 0 },
+ { STA_CFG_DMA_ENA, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_cv_err_fields[] = {
+ { STA_CV_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_fcs_err_fields[] = {
+ { STA_FCS_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_lsb_fields[] = {
+ { STA_HOST_ADR_LSB_LSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_msb_fields[] = {
+ { STA_HOST_ADR_MSB_MSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_load_bin_fields[] = {
+ { STA_LOAD_BIN_BIN, 32, 0, 8388607 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_0_fields[] = {
+ { STA_LOAD_BPS_RX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_1_fields[] = {
+ { STA_LOAD_BPS_RX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_0_fields[] = {
+ { STA_LOAD_BPS_TX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_1_fields[] = {
+ { STA_LOAD_BPS_TX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_0_fields[] = {
+ { STA_LOAD_PPS_RX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_1_fields[] = {
+ { STA_LOAD_PPS_RX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_0_fields[] = {
+ { STA_LOAD_PPS_TX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_1_fields[] = {
+ { STA_LOAD_PPS_TX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_pckt_fields[] = {
+ { STA_PCKT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_status_fields[] = {
+ { STA_STATUS_STAT_TOGGLE_MISSED, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s sta_registers[] = {
+ { STA_BYTE, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_byte_fields },
+ { STA_CFG, 0, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, sta_cfg_fields },
+ { STA_CV_ERR, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_cv_err_fields },
+ { STA_FCS_ERR, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_fcs_err_fields },
+ { STA_HOST_ADR_LSB, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_lsb_fields },
+ { STA_HOST_ADR_MSB, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_msb_fields },
+ { STA_LOAD_BIN, 8, 32, NTHW_FPGA_REG_TYPE_WO, 8388607, 1, sta_load_bin_fields },
+ { STA_LOAD_BPS_RX_0, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_0_fields },
+ { STA_LOAD_BPS_RX_1, 13, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_1_fields },
+ { STA_LOAD_BPS_TX_0, 15, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_0_fields },
+ { STA_LOAD_BPS_TX_1, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_1_fields },
+ { STA_LOAD_PPS_RX_0, 10, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_0_fields },
+ { STA_LOAD_PPS_RX_1, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_1_fields },
+ { STA_LOAD_PPS_TX_0, 14, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_0_fields },
+ { STA_LOAD_PPS_TX_1, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_1_fields },
+ { STA_PCKT, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_pckt_fields },
+ { STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2537,6 +2626,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
+ { MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2695,5 +2785,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index 150b9dd976..a2ab266931 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -19,5 +19,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RAC, "RAC" },
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
+ { MOD_STA, "STA" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
index 640ffcbc52..0cd183fcaa 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -7,11 +7,17 @@
#define _NTHW_FPGA_REG_DEFS_STA_
/* STA */
+#define STA_BYTE (0xa08364d4UL)
+#define STA_BYTE_CNT (0x3119e6bcUL)
#define STA_CFG (0xcecaf9f4UL)
#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
#define STA_CFG_CNT_FRZ (0x8c27a596UL)
#define STA_CFG_DMA_ENA (0x940dbacUL)
#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_CV_ERR (0x7db7db5dUL)
+#define STA_CV_ERR_CNT (0x2c02fbbeUL)
+#define STA_FCS_ERR (0xa0de1647UL)
+#define STA_FCS_ERR_CNT (0xc68c37d1UL)
#define STA_HOST_ADR_LSB (0xde569336UL)
#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
#define STA_HOST_ADR_MSB (0xdf94f901UL)
@@ -34,6 +40,8 @@
#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_PCKT (0xecc8f30aUL)
+#define STA_PCKT_CNT (0x63291d16UL)
#define STA_STATUS (0x91c5c51cUL)
#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 60/73] net/ntnic: add TSM module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (58 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 59/73] net/ntnic: add STA module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 61/73] net/ntnic: add xstats Serhii Iliushyk
` (13 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with TSM module
support, which enables statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../supported/nthw_fpga_9563_055_049_0000.c | 394 +++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 177 ++++++++
4 files changed, 572 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e5d5abd0ed..64351bcdc7 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,6 +12,7 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
+Basic stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efdb084cd6..620968ceb6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2575,6 +2575,397 @@ static nthw_fpga_register_init_s sta_registers[] = {
{ STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
};
+static nthw_fpga_field_init_s tsm_con0_config_fields[] = {
+ { TSM_CON0_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON0_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON0_CONFIG_PORT, 3, 0, 0 }, { TSM_CON0_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON0_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_interface_fields[] = {
+ { TSM_CON0_INTERFACE_EX_TERM, 2, 0, 3 }, { TSM_CON0_INTERFACE_IN_REF_PWM, 8, 12, 128 },
+ { TSM_CON0_INTERFACE_PWM_ENA, 1, 2, 0 }, { TSM_CON0_INTERFACE_RESERVED, 1, 3, 0 },
+ { TSM_CON0_INTERFACE_VTERM_PWM, 8, 4, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_hi_fields[] = {
+ { TSM_CON0_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_lo_fields[] = {
+ { TSM_CON0_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_config_fields[] = {
+ { TSM_CON1_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON1_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON1_CONFIG_PORT, 3, 0, 0 }, { TSM_CON1_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON1_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_hi_fields[] = {
+ { TSM_CON1_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_lo_fields[] = {
+ { TSM_CON1_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_config_fields[] = {
+ { TSM_CON2_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON2_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON2_CONFIG_PORT, 3, 0, 0 }, { TSM_CON2_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON2_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_hi_fields[] = {
+ { TSM_CON2_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_lo_fields[] = {
+ { TSM_CON2_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_config_fields[] = {
+ { TSM_CON3_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON3_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON3_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_hi_fields[] = {
+ { TSM_CON3_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_lo_fields[] = {
+ { TSM_CON3_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_config_fields[] = {
+ { TSM_CON4_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON4_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON4_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_hi_fields[] = {
+ { TSM_CON4_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_lo_fields[] = {
+ { TSM_CON4_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_config_fields[] = {
+ { TSM_CON5_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON5_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON5_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_hi_fields[] = {
+ { TSM_CON5_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_lo_fields[] = {
+ { TSM_CON5_SAMPLE_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_config_fields[] = {
+ { TSM_CON6_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON6_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON6_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_hi_fields[] = {
+ { TSM_CON6_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_lo_fields[] = {
+ { TSM_CON6_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_hi_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_lo_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_config_fields[] = {
+ { TSM_CONFIG_NTTS_SRC, 2, 5, 0 }, { TSM_CONFIG_NTTS_SYNC, 1, 4, 0 },
+ { TSM_CONFIG_TIMESET_EDGE, 2, 8, 1 }, { TSM_CONFIG_TIMESET_SRC, 3, 10, 0 },
+ { TSM_CONFIG_TIMESET_UP, 1, 7, 0 }, { TSM_CONFIG_TS_FORMAT, 4, 0, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_int_config_fields[] = {
+ { TSM_INT_CONFIG_AUTO_DISABLE, 1, 0, 0 },
+ { TSM_INT_CONFIG_MASK, 19, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_int_stat_fields[] = {
+ { TSM_INT_STAT_CAUSE, 19, 1, 0 },
+ { TSM_INT_STAT_ENABLE, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_led_fields[] = {
+ { TSM_LED_LED0_BG_COLOR, 2, 3, 0 }, { TSM_LED_LED0_COLOR, 2, 1, 0 },
+ { TSM_LED_LED0_MODE, 1, 0, 0 }, { TSM_LED_LED0_SRC, 4, 5, 0 },
+ { TSM_LED_LED1_BG_COLOR, 2, 12, 0 }, { TSM_LED_LED1_COLOR, 2, 10, 0 },
+ { TSM_LED_LED1_MODE, 1, 9, 0 }, { TSM_LED_LED1_SRC, 4, 14, 1 },
+ { TSM_LED_LED2_BG_COLOR, 2, 21, 0 }, { TSM_LED_LED2_COLOR, 2, 19, 0 },
+ { TSM_LED_LED2_MODE, 1, 18, 0 }, { TSM_LED_LED2_SRC, 4, 23, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_config_fields[] = {
+ { TSM_NTTS_CONFIG_AUTO_HARDSET, 1, 5, 1 },
+ { TSM_NTTS_CONFIG_EXT_CLK_ADJ, 1, 6, 0 },
+ { TSM_NTTS_CONFIG_HIGH_SAMPLE, 1, 4, 0 },
+ { TSM_NTTS_CONFIG_TS_SRC_FORMAT, 4, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ext_stat_fields[] = {
+ { TSM_NTTS_EXT_STAT_MASTER_ID, 8, 16, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_REV, 8, 24, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_STAT, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_hi_fields[] = {
+ { TSM_NTTS_LIMIT_HI_SEC, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_lo_fields[] = {
+ { TSM_NTTS_LIMIT_LO_NS, 32, 0, 100000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_offset_fields[] = {
+ { TSM_NTTS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_hi_fields[] = {
+ { TSM_NTTS_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_lo_fields[] = {
+ { TSM_NTTS_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_stat_fields[] = {
+ { TSM_NTTS_STAT_NTTS_VALID, 1, 0, 0 },
+ { TSM_NTTS_STAT_SIGNAL_LOST, 8, 1, 0 },
+ { TSM_NTTS_STAT_SYNC_LOST, 8, 9, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_hi_fields[] = {
+ { TSM_NTTS_TS_T0_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_lo_fields[] = {
+ { TSM_NTTS_TS_T0_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_offset_fields[] = {
+ { TSM_NTTS_TS_T0_OFFSET_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_ctrl_fields[] = {
+ { TSM_PB_CTRL_INSTMEM_WR, 1, 1, 0 },
+ { TSM_PB_CTRL_RST, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_instmem_fields[] = {
+ { TSM_PB_INSTMEM_MEM_ADDR, 14, 0, 0 },
+ { TSM_PB_INSTMEM_MEM_DATA, 18, 14, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_i_fields[] = {
+ { TSM_PI_CTRL_I_VAL, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_ki_fields[] = {
+ { TSM_PI_CTRL_KI_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_kp_fields[] = {
+ { TSM_PI_CTRL_KP_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_shl_fields[] = {
+ { TSM_PI_CTRL_SHL_VAL, 4, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_stat_fields[] = {
+ { TSM_STAT_HARD_SYNC, 8, 8, 0 }, { TSM_STAT_LINK_CON0, 1, 0, 0 },
+ { TSM_STAT_LINK_CON1, 1, 1, 0 }, { TSM_STAT_LINK_CON2, 1, 2, 0 },
+ { TSM_STAT_LINK_CON3, 1, 3, 0 }, { TSM_STAT_LINK_CON4, 1, 4, 0 },
+ { TSM_STAT_LINK_CON5, 1, 5, 0 }, { TSM_STAT_NTTS_INSYNC, 1, 6, 0 },
+ { TSM_STAT_PTP_MI_PRESENT, 1, 7, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_ctrl_fields[] = {
+ { TSM_TIMER_CTRL_TIMER_EN_T0, 1, 0, 0 },
+ { TSM_TIMER_CTRL_TIMER_EN_T1, 1, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t0_fields[] = {
+ { TSM_TIMER_T0_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t1_fields[] = {
+ { TSM_TIMER_T1_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_hi_fields[] = {
+ { TSM_TIME_HARDSET_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_lo_fields[] = {
+ { TSM_TIME_HARDSET_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hi_fields[] = {
+ { TSM_TIME_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_lo_fields[] = {
+ { TSM_TIME_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_rate_adj_fields[] = {
+ { TSM_TIME_RATE_ADJ_FRACTION, 29, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_hi_fields[] = {
+ { TSM_TS_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_lo_fields[] = {
+ { TSM_TS_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_offset_fields[] = {
+ { TSM_TS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_fields[] = {
+ { TSM_TS_STAT_OVERRUN, 1, 16, 0 },
+ { TSM_TS_STAT_SAMPLES, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_hi_offset_fields[] = {
+ { TSM_TS_STAT_HI_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_lo_offset_fields[] = {
+ { TSM_TS_STAT_LO_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_hi_fields[] = {
+ { TSM_TS_STAT_TAR_HI_SEC, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_lo_fields[] = {
+ { TSM_TS_STAT_TAR_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x_fields[] = {
+ { TSM_TS_STAT_X_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_hi_fields[] = {
+ { TSM_TS_STAT_X2_HI_NS, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_lo_fields[] = {
+ { TSM_TS_STAT_X2_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_utc_offset_fields[] = {
+ { TSM_UTC_OFFSET_SEC, 8, 0, 0 },
+};
+
+static nthw_fpga_register_init_s tsm_registers[] = {
+ { TSM_CON0_CONFIG, 24, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con0_config_fields },
+ {
+ TSM_CON0_INTERFACE, 25, 20, NTHW_FPGA_REG_TYPE_RW, 524291, 5,
+ tsm_con0_interface_fields
+ },
+ { TSM_CON0_SAMPLE_HI, 27, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_hi_fields },
+ { TSM_CON0_SAMPLE_LO, 26, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_lo_fields },
+ { TSM_CON1_CONFIG, 28, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con1_config_fields },
+ { TSM_CON1_SAMPLE_HI, 30, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_hi_fields },
+ { TSM_CON1_SAMPLE_LO, 29, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_lo_fields },
+ { TSM_CON2_CONFIG, 31, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con2_config_fields },
+ { TSM_CON2_SAMPLE_HI, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_hi_fields },
+ { TSM_CON2_SAMPLE_LO, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_lo_fields },
+ { TSM_CON3_CONFIG, 34, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con3_config_fields },
+ { TSM_CON3_SAMPLE_HI, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_hi_fields },
+ { TSM_CON3_SAMPLE_LO, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_lo_fields },
+ { TSM_CON4_CONFIG, 37, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con4_config_fields },
+ { TSM_CON4_SAMPLE_HI, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_hi_fields },
+ { TSM_CON4_SAMPLE_LO, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_lo_fields },
+ { TSM_CON5_CONFIG, 40, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con5_config_fields },
+ { TSM_CON5_SAMPLE_HI, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_hi_fields },
+ { TSM_CON5_SAMPLE_LO, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_lo_fields },
+ { TSM_CON6_CONFIG, 43, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con6_config_fields },
+ { TSM_CON6_SAMPLE_HI, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_hi_fields },
+ { TSM_CON6_SAMPLE_LO, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_lo_fields },
+ {
+ TSM_CON7_HOST_SAMPLE_HI, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_hi_fields
+ },
+ {
+ TSM_CON7_HOST_SAMPLE_LO, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_lo_fields
+ },
+ { TSM_CONFIG, 0, 13, NTHW_FPGA_REG_TYPE_RW, 257, 6, tsm_config_fields },
+ { TSM_INT_CONFIG, 2, 20, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_int_config_fields },
+ { TSM_INT_STAT, 3, 20, NTHW_FPGA_REG_TYPE_MIXED, 0, 2, tsm_int_stat_fields },
+ { TSM_LED, 4, 27, NTHW_FPGA_REG_TYPE_RW, 16793600, 12, tsm_led_fields },
+ { TSM_NTTS_CONFIG, 13, 7, NTHW_FPGA_REG_TYPE_RW, 32, 4, tsm_ntts_config_fields },
+ { TSM_NTTS_EXT_STAT, 15, 32, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, tsm_ntts_ext_stat_fields },
+ { TSM_NTTS_LIMIT_HI, 23, 16, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_limit_hi_fields },
+ { TSM_NTTS_LIMIT_LO, 22, 32, NTHW_FPGA_REG_TYPE_RW, 100000, 1, tsm_ntts_limit_lo_fields },
+ { TSM_NTTS_OFFSET, 21, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_offset_fields },
+ { TSM_NTTS_SAMPLE_HI, 19, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_hi_fields },
+ { TSM_NTTS_SAMPLE_LO, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_lo_fields },
+ { TSM_NTTS_STAT, 14, 17, NTHW_FPGA_REG_TYPE_RO, 0, 3, tsm_ntts_stat_fields },
+ { TSM_NTTS_TS_T0_HI, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_hi_fields },
+ { TSM_NTTS_TS_T0_LO, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_lo_fields },
+ {
+ TSM_NTTS_TS_T0_OFFSET, 20, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ntts_ts_t0_offset_fields
+ },
+ { TSM_PB_CTRL, 63, 2, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_ctrl_fields },
+ { TSM_PB_INSTMEM, 64, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_instmem_fields },
+ { TSM_PI_CTRL_I, 54, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_i_fields },
+ { TSM_PI_CTRL_KI, 52, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_ki_fields },
+ { TSM_PI_CTRL_KP, 51, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_kp_fields },
+ { TSM_PI_CTRL_SHL, 53, 4, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_shl_fields },
+ { TSM_STAT, 1, 16, NTHW_FPGA_REG_TYPE_RO, 0, 9, tsm_stat_fields },
+ { TSM_TIMER_CTRL, 48, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_timer_ctrl_fields },
+ { TSM_TIMER_T0, 49, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t0_fields },
+ { TSM_TIMER_T1, 50, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t1_fields },
+ { TSM_TIME_HARDSET_HI, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_hi_fields },
+ { TSM_TIME_HARDSET_LO, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_lo_fields },
+ { TSM_TIME_HI, 9, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_hi_fields },
+ { TSM_TIME_LO, 8, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_lo_fields },
+ { TSM_TIME_RATE_ADJ, 10, 29, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_rate_adj_fields },
+ { TSM_TS_HI, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_hi_fields },
+ { TSM_TS_LO, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_lo_fields },
+ { TSM_TS_OFFSET, 7, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ts_offset_fields },
+ { TSM_TS_STAT, 55, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, tsm_ts_stat_fields },
+ {
+ TSM_TS_STAT_HI_OFFSET, 62, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_hi_offset_fields
+ },
+ {
+ TSM_TS_STAT_LO_OFFSET, 61, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_lo_offset_fields
+ },
+ { TSM_TS_STAT_TAR_HI, 57, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_hi_fields },
+ { TSM_TS_STAT_TAR_LO, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_lo_fields },
+ { TSM_TS_STAT_X, 58, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x_fields },
+ { TSM_TS_STAT_X2_HI, 60, 16, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_hi_fields },
+ { TSM_TS_STAT_X2_LO, 59, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_lo_fields },
+ { TSM_UTC_OFFSET, 65, 8, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_utc_offset_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2627,6 +3018,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
{ MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
+ { MOD_TSM, 0, MOD_TSM, 0, 8, NTHW_FPGA_BUS_TYPE_RAB2, 1024, 66, tsm_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2785,5 +3177,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 37, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index a2ab266931..e8ed7faf0d 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -20,5 +20,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
{ MOD_STA, "STA" },
+ { MOD_TSM, "TSM" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
index a087850aa4..cdb733ee17 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -7,8 +7,158 @@
#define _NTHW_FPGA_REG_DEFS_TSM_
/* TSM */
+#define TSM_CON0_CONFIG (0xf893d371UL)
+#define TSM_CON0_CONFIG_BLIND (0x59ccfcbUL)
+#define TSM_CON0_CONFIG_DC_SRC (0x1879812bUL)
+#define TSM_CON0_CONFIG_PORT (0x3ff0bb08UL)
+#define TSM_CON0_CONFIG_PPSIN_2_5V (0xb8e78227UL)
+#define TSM_CON0_CONFIG_SAMPLE_EDGE (0x4a4022ebUL)
+#define TSM_CON0_INTERFACE (0x76e93b59UL)
+#define TSM_CON0_INTERFACE_EX_TERM (0xd079b416UL)
+#define TSM_CON0_INTERFACE_IN_REF_PWM (0x16f73c33UL)
+#define TSM_CON0_INTERFACE_PWM_ENA (0x3629e73fUL)
+#define TSM_CON0_INTERFACE_RESERVED (0xf9c5066UL)
+#define TSM_CON0_INTERFACE_VTERM_PWM (0x6d2b1e23UL)
+#define TSM_CON0_SAMPLE_HI (0x6e536b8UL)
+#define TSM_CON0_SAMPLE_HI_SEC (0x5fc26159UL)
+#define TSM_CON0_SAMPLE_LO (0x8bea5689UL)
+#define TSM_CON0_SAMPLE_LO_NS (0x13d0010dUL)
+#define TSM_CON1_CONFIG (0x3439d3efUL)
+#define TSM_CON1_CONFIG_BLIND (0x98932ebdUL)
+#define TSM_CON1_CONFIG_DC_SRC (0xa1825ac3UL)
+#define TSM_CON1_CONFIG_PORT (0xe266628dUL)
+#define TSM_CON1_CONFIG_PPSIN_2_5V (0x6f05027fUL)
+#define TSM_CON1_CONFIG_SAMPLE_EDGE (0x2f2719adUL)
+#define TSM_CON1_SAMPLE_HI (0xc76be978UL)
+#define TSM_CON1_SAMPLE_HI_SEC (0xe639bab1UL)
+#define TSM_CON1_SAMPLE_LO (0x4a648949UL)
+#define TSM_CON1_SAMPLE_LO_NS (0x8edfe07bUL)
+#define TSM_CON2_CONFIG (0xbab6d40cUL)
+#define TSM_CON2_CONFIG_BLIND (0xe4f20b66UL)
+#define TSM_CON2_CONFIG_DC_SRC (0xb0ff30baUL)
+#define TSM_CON2_CONFIG_PORT (0x5fac0e43UL)
+#define TSM_CON2_CONFIG_PPSIN_2_5V (0xcc5384d6UL)
+#define TSM_CON2_CONFIG_SAMPLE_EDGE (0x808e5467UL)
+#define TSM_CON2_SAMPLE_HI (0x5e898f79UL)
+#define TSM_CON2_SAMPLE_HI_SEC (0xf744d0c8UL)
+#define TSM_CON2_SAMPLE_LO (0xd386ef48UL)
+#define TSM_CON2_SAMPLE_LO_NS (0xf2bec5a0UL)
+#define TSM_CON3_CONFIG (0x761cd492UL)
+#define TSM_CON3_CONFIG_BLIND (0x79fdea10UL)
+#define TSM_CON3_CONFIG_PORT (0x823ad7c6UL)
+#define TSM_CON3_CONFIG_SAMPLE_EDGE (0xe5e96f21UL)
+#define TSM_CON3_SAMPLE_HI (0x9f0750b9UL)
+#define TSM_CON3_SAMPLE_HI_SEC (0x4ebf0b20UL)
+#define TSM_CON3_SAMPLE_LO (0x12083088UL)
+#define TSM_CON3_SAMPLE_LO_NS (0x6fb124d6UL)
+#define TSM_CON4_CONFIG (0x7cd9dd8bUL)
+#define TSM_CON4_CONFIG_BLIND (0x1c3040d0UL)
+#define TSM_CON4_CONFIG_PORT (0xff49d19eUL)
+#define TSM_CON4_CONFIG_SAMPLE_EDGE (0x4adc9b2UL)
+#define TSM_CON4_SAMPLE_HI (0xb63c453aUL)
+#define TSM_CON4_SAMPLE_HI_SEC (0xd5be043aUL)
+#define TSM_CON4_SAMPLE_LO (0x3b33250bUL)
+#define TSM_CON4_SAMPLE_LO_NS (0xa7c8e16UL)
+#define TSM_CON5_CONFIG (0xb073dd15UL)
+#define TSM_CON5_CONFIG_BLIND (0x813fa1a6UL)
+#define TSM_CON5_CONFIG_PORT (0x22df081bUL)
+#define TSM_CON5_CONFIG_SAMPLE_EDGE (0x61caf2f4UL)
+#define TSM_CON5_SAMPLE_HI (0x77b29afaUL)
+#define TSM_CON5_SAMPLE_HI_SEC (0x6c45dfd2UL)
+#define TSM_CON5_SAMPLE_LO (0xfabdfacbUL)
+#define TSM_CON5_SAMPLE_LO_TIME (0x945d87e8UL)
+#define TSM_CON6_CONFIG (0x3efcdaf6UL)
+#define TSM_CON6_CONFIG_BLIND (0xfd5e847dUL)
+#define TSM_CON6_CONFIG_PORT (0x9f1564d5UL)
+#define TSM_CON6_CONFIG_SAMPLE_EDGE (0xce63bf3eUL)
+#define TSM_CON6_SAMPLE_HI (0xee50fcfbUL)
+#define TSM_CON6_SAMPLE_HI_SEC (0x7d38b5abUL)
+#define TSM_CON6_SAMPLE_LO (0x635f9ccaUL)
+#define TSM_CON6_SAMPLE_LO_NS (0xeb124abbUL)
+#define TSM_CON7_HOST_SAMPLE_HI (0xdcd90e52UL)
+#define TSM_CON7_HOST_SAMPLE_HI_SEC (0xd98d3618UL)
+#define TSM_CON7_HOST_SAMPLE_LO (0x51d66e63UL)
+#define TSM_CON7_HOST_SAMPLE_LO_NS (0x8f5594ddUL)
#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_NTTS_SRC (0x1b60227bUL)
+#define TSM_CONFIG_NTTS_SYNC (0x43e0a69dUL)
+#define TSM_CONFIG_TIMESET_EDGE (0x8c381127UL)
+#define TSM_CONFIG_TIMESET_SRC (0xe7590a31UL)
+#define TSM_CONFIG_TIMESET_UP (0x561980c1UL)
#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_INT_CONFIG (0x9a0d52dUL)
+#define TSM_INT_CONFIG_AUTO_DISABLE (0x9581470UL)
+#define TSM_INT_CONFIG_MASK (0xf00cd3d7UL)
+#define TSM_INT_STAT (0xa4611a70UL)
+#define TSM_INT_STAT_CAUSE (0x315168cfUL)
+#define TSM_INT_STAT_ENABLE (0x980a12d1UL)
+#define TSM_LED (0x6ae05f87UL)
+#define TSM_LED_LED0_BG_COLOR (0x897cf9eeUL)
+#define TSM_LED_LED0_COLOR (0x6d7ada39UL)
+#define TSM_LED_LED0_MODE (0x6087b644UL)
+#define TSM_LED_LED0_SRC (0x4fe29639UL)
+#define TSM_LED_LED1_BG_COLOR (0x66be92d0UL)
+#define TSM_LED_LED1_COLOR (0xcb0dd18dUL)
+#define TSM_LED_LED1_MODE (0xabdb65e1UL)
+#define TSM_LED_LED1_SRC (0x7282bf89UL)
+#define TSM_LED_LED2_BG_COLOR (0x8d8929d3UL)
+#define TSM_LED_LED2_COLOR (0xfae5cb10UL)
+#define TSM_LED_LED2_MODE (0x2d4f174fUL)
+#define TSM_LED_LED2_SRC (0x3522c559UL)
+#define TSM_NTTS_CONFIG (0x8bc38bdeUL)
+#define TSM_NTTS_CONFIG_AUTO_HARDSET (0xd75be25dUL)
+#define TSM_NTTS_CONFIG_EXT_CLK_ADJ (0x700425b6UL)
+#define TSM_NTTS_CONFIG_HIGH_SAMPLE (0x37135b7eUL)
+#define TSM_NTTS_CONFIG_TS_SRC_FORMAT (0x6e6e707UL)
+#define TSM_NTTS_EXT_STAT (0x2b0315b7UL)
+#define TSM_NTTS_EXT_STAT_MASTER_ID (0xf263315eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_REV (0xd543795eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_STAT (0x92d96f5eUL)
+#define TSM_NTTS_LIMIT_HI (0x1ddaa85fUL)
+#define TSM_NTTS_LIMIT_HI_SEC (0x315c6ef2UL)
+#define TSM_NTTS_LIMIT_LO (0x90d5c86eUL)
+#define TSM_NTTS_LIMIT_LO_NS (0xe6d94d9aUL)
+#define TSM_NTTS_OFFSET (0x6436e72UL)
+#define TSM_NTTS_OFFSET_NS (0x12d43a06UL)
+#define TSM_NTTS_SAMPLE_HI (0xcdc8aa3eUL)
+#define TSM_NTTS_SAMPLE_HI_SEC (0x4f6588fdUL)
+#define TSM_NTTS_SAMPLE_LO (0x40c7ca0fUL)
+#define TSM_NTTS_SAMPLE_LO_NS (0x6e43ff97UL)
+#define TSM_NTTS_STAT (0x6502b820UL)
+#define TSM_NTTS_STAT_NTTS_VALID (0x3e184471UL)
+#define TSM_NTTS_STAT_SIGNAL_LOST (0x178bedfdUL)
+#define TSM_NTTS_STAT_SYNC_LOST (0xe4cd53dfUL)
+#define TSM_NTTS_TS_T0_HI (0x1300d1b6UL)
+#define TSM_NTTS_TS_T0_HI_TIME (0xa016ae4fUL)
+#define TSM_NTTS_TS_T0_LO (0x9e0fb187UL)
+#define TSM_NTTS_TS_T0_LO_TIME (0x82006941UL)
+#define TSM_NTTS_TS_T0_OFFSET (0xbf70ce4fUL)
+#define TSM_NTTS_TS_T0_OFFSET_COUNT (0x35dd4398UL)
+#define TSM_PB_CTRL (0x7a8b60faUL)
+#define TSM_PB_CTRL_INSTMEM_WR (0xf96e2cbcUL)
+#define TSM_PB_CTRL_RESET (0xa38ade8bUL)
+#define TSM_PB_CTRL_RST (0x3aaa82f4UL)
+#define TSM_PB_INSTMEM (0xb54aeecUL)
+#define TSM_PB_INSTMEM_MEM_ADDR (0x9ac79b6eUL)
+#define TSM_PB_INSTMEM_MEM_DATA (0x65aefa38UL)
+#define TSM_PI_CTRL_I (0x8d71a4e2UL)
+#define TSM_PI_CTRL_I_VAL (0x98baedc9UL)
+#define TSM_PI_CTRL_KI (0xa1bd86cbUL)
+#define TSM_PI_CTRL_KI_GAIN (0x53faa916UL)
+#define TSM_PI_CTRL_KP (0xc5d62e0bUL)
+#define TSM_PI_CTRL_KP_GAIN (0x7723fa45UL)
+#define TSM_PI_CTRL_SHL (0xaa518701UL)
+#define TSM_PI_CTRL_SHL_VAL (0x56f56a6fUL)
+#define TSM_STAT (0xa55bf677UL)
+#define TSM_STAT_HARD_SYNC (0x7fff20fdUL)
+#define TSM_STAT_LINK_CON0 (0x216086f0UL)
+#define TSM_STAT_LINK_CON1 (0x5667b666UL)
+#define TSM_STAT_LINK_CON2 (0xcf6ee7dcUL)
+#define TSM_STAT_LINK_CON3 (0xb869d74aUL)
+#define TSM_STAT_LINK_CON4 (0x260d42e9UL)
+#define TSM_STAT_LINK_CON5 (0x510a727fUL)
+#define TSM_STAT_NTTS_INSYNC (0xb593a245UL)
+#define TSM_STAT_PTP_MI_PRESENT (0x43131eb0UL)
#define TSM_TIMER_CTRL (0x648da051UL)
#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
@@ -16,13 +166,40 @@
#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
#define TSM_TIMER_T1 (0x36752733UL)
#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HARDSET_HI (0xf28bdb46UL)
+#define TSM_TIME_HARDSET_HI_TIME (0x2d9a28baUL)
+#define TSM_TIME_HARDSET_LO (0x7f84bb77UL)
+#define TSM_TIME_HARDSET_LO_TIME (0xf8cefb4UL)
#define TSM_TIME_HI (0x175acea1UL)
#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
#define TSM_TIME_LO (0x9a55ae90UL)
#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TIME_RATE_ADJ (0xb1cc4bb1UL)
+#define TSM_TIME_RATE_ADJ_FRACTION (0xb7ab96UL)
#define TSM_TS_HI (0xccfe9e5eUL)
#define TSM_TS_HI_TIME (0xc23fed30UL)
#define TSM_TS_LO (0x41f1fe6fUL)
#define TSM_TS_LO_TIME (0xe0292a3eUL)
+#define TSM_TS_OFFSET (0x4b2e6e13UL)
+#define TSM_TS_OFFSET_NS (0x68c286b9UL)
+#define TSM_TS_STAT (0x64d41b8cUL)
+#define TSM_TS_STAT_OVERRUN (0xad9db92aUL)
+#define TSM_TS_STAT_SAMPLES (0xb6350e0bUL)
+#define TSM_TS_STAT_HI_OFFSET (0x1aa2ddf2UL)
+#define TSM_TS_STAT_HI_OFFSET_NS (0xeb040e0fUL)
+#define TSM_TS_STAT_LO_OFFSET (0x81218579UL)
+#define TSM_TS_STAT_LO_OFFSET_NS (0xb7ff33UL)
+#define TSM_TS_STAT_TAR_HI (0x65af24b6UL)
+#define TSM_TS_STAT_TAR_HI_SEC (0x7e92f619UL)
+#define TSM_TS_STAT_TAR_LO (0xe8a04487UL)
+#define TSM_TS_STAT_TAR_LO_NS (0xf7b3f439UL)
+#define TSM_TS_STAT_X (0x419f0ddUL)
+#define TSM_TS_STAT_X_NS (0xa48c3f27UL)
+#define TSM_TS_STAT_X2_HI (0xd6b1c517UL)
+#define TSM_TS_STAT_X2_HI_NS (0x4288c50fUL)
+#define TSM_TS_STAT_X2_LO (0x5bbea526UL)
+#define TSM_TS_STAT_X2_LO_NS (0x92633c13UL)
+#define TSM_UTC_OFFSET (0xf622a13aUL)
+#define TSM_UTC_OFFSET_SEC (0xd9c80209UL)
#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
* [PATCH v2 61/73] net/ntnic: add xstats
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 60/73] net/ntnic: add TSM module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 62/73] net/ntnic: added flow statistics Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement and initialize extended statistics.
Extend eth_dev_ops with the new xstats callbacks.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
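A note on the by-id callbacks below: following the usual ethdev xstats contract, `ids == NULL` means "report the full set", and an out-of-range id is an error. A self-contained sketch of that selection logic over a plain array (names and layout are illustrative, not the driver's):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: select values by id from a full stats array,
 * following the usual ethdev xstats_get_by_id contract. */
static int demo_xstats_get_by_id(const uint64_t *full, unsigned int full_n,
				 const uint64_t *ids, uint64_t *values,
				 unsigned int n)
{
	unsigned int i;

	if (ids == NULL) {
		/* No id list: report the whole set if the buffer fits. */
		if (n < full_n)
			return (int)full_n;
		for (i = 0; i < full_n; i++)
			values[i] = full[i];
		return (int)full_n;
	}

	for (i = 0; i < n; i++) {
		if (ids[i] >= full_n)
			return -1;	/* unknown id */
		values[i] = full[ids[i]];
	}
	return (int)n;
}
```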
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 36 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 112 +++
drivers/net/ntnic/ntnic_mod_reg.c | 15 +
drivers/net/ntnic/ntnic_mod_reg.h | 28 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 ++++++++++++++++++
7 files changed, 1022 insertions(+)
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 64351bcdc7..947c7ba3a1 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -13,6 +13,7 @@ Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
Basic stats = Y
+Extended stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 0735dbc085..4d4affa3cf 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -169,6 +169,39 @@ struct port_counters_v2 {
};
struct flm_counters_v1 {
+ /* FLM 0.17 */
+ uint64_t current;
+ uint64_t learn_done;
+ uint64_t learn_ignore;
+ uint64_t learn_fail;
+ uint64_t unlearn_done;
+ uint64_t unlearn_ignore;
+ uint64_t auto_unlearn_done;
+ uint64_t auto_unlearn_ignore;
+ uint64_t auto_unlearn_fail;
+ uint64_t timeout_unlearn_done;
+ uint64_t rel_done;
+ uint64_t rel_ignore;
+ /* FLM 0.20 */
+ uint64_t prb_done;
+ uint64_t prb_ignore;
+ uint64_t sta_done;
+ uint64_t inf_done;
+ uint64_t inf_skip;
+ uint64_t pck_hit;
+ uint64_t pck_miss;
+ uint64_t pck_unh;
+ uint64_t pck_dis;
+ uint64_t csh_hit;
+ uint64_t csh_miss;
+ uint64_t csh_unh;
+ uint64_t cuc_start;
+ uint64_t cuc_move;
+ /* FLM 0.17 Load */
+ uint64_t load_lps;
+ uint64_t load_aps;
+ uint64_t max_lps;
+ uint64_t max_aps;
};
struct nt4ga_stat_s {
@@ -200,6 +233,9 @@ struct nt4ga_stat_s {
struct host_buffer_counters *mp_stat_structs_hb;
struct port_load_counters *mp_port_load;
+ int flm_stat_ver;
+ struct flm_counters_v1 *mp_stat_structs_flm;
+
/* Rx/Tx totals: */
uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index a6c4fec0be..e59ac5bdb3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -31,6 +31,7 @@ sources = files(
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
'ntnic_filter/ntnic_filter.c',
+ 'ntnic_xstats/ntnic_xstats.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index f94340f489..f6a74c7df2 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1496,6 +1496,113 @@ static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct r
return 0;
}
+static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *stats, unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats = ntnic_xstats_ops->nthw_xstats_get(p_nt4ga_stat, stats, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats =
+ ntnic_xstats_ops->nthw_xstats_get_by_id(p_nt4ga_stat, ids, values, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+
+ struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ ntnic_xstats_ops->nthw_xstats_reset(p_nt4ga_stat, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return dpdk_stats_reset(internals, p_nt_drv, if_index);
+}
+
+static int eth_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names(p_nt4ga_stat, xstats_names, size);
+}
+
+static int eth_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names_by_id(p_nt4ga_stat, xstats_names, ids,
+ size);
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1594,6 +1701,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
+ .xstats_get = eth_xstats_get,
+ .xstats_get_names = eth_xstats_get_names,
+ .xstats_reset = eth_xstats_reset,
+ .xstats_get_by_id = eth_xstats_get_by_id,
+ .xstats_get_names_by_id = eth_xstats_get_names_by_id,
.promiscuous_enable = promiscuous_enable,
.rss_hash_update = eth_dev_rss_hash_update,
.rss_hash_conf_get = rss_hash_conf_get,
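Each callback above serializes access to the shared statistics block with `p_nt_drv->stat_lck` before reading or resetting counters. The pattern, reduced to a self-contained sketch (the struct and field names are hypothetical, not the driver's):

```c
#include <pthread.h>
#include <stdint.h>

/* Illustrative: a counter block guarded by a mutex, mirroring how the
 * xstats callbacks take stat_lck around reads and resets. */
struct demo_stats {
	pthread_mutex_t lck;
	uint64_t pkts;
};

static void demo_stats_add(struct demo_stats *s, uint64_t n)
{
	pthread_mutex_lock(&s->lck);
	s->pkts += n;
	pthread_mutex_unlock(&s->lck);
}

/* Snapshot under the lock so the reader never sees a torn update. */
static uint64_t demo_stats_snapshot(struct demo_stats *s)
{
	pthread_mutex_lock(&s->lck);
	uint64_t v = s->pkts;
	pthread_mutex_unlock(&s->lck);
	return v;
}
```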
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 355e2032b1..6737d18a6f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -192,3 +192,18 @@ const struct rte_flow_ops *get_dev_flow_ops(void)
return dev_flow_ops;
}
+
+static struct ntnic_xstats_ops *ntnic_xstats_ops;
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops)
+{
+ ntnic_xstats_ops = ops;
+}
+
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void)
+{
+ if (ntnic_xstats_ops == NULL)
+ ntnic_xstats_ops_init();
+
+ return ntnic_xstats_ops;
+}
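The hunk above follows the driver's module-registration pattern: an optional module registers its ops table, and consumers fetch it through a getter that lazily triggers initialization on first use. A minimal standalone sketch of the same pattern (all names here are illustrative):

```c
#include <stddef.h>

/* Hypothetical ops table with one callback. */
struct demo_ops {
	int (*get_count)(void);
};

static int demo_get_count(void) { return 42; }

static struct demo_ops demo_ops_impl = { .get_count = demo_get_count };

static struct demo_ops *registered_ops;

/* The module calls this from its init path. */
void register_demo_ops(struct demo_ops *ops)
{
	registered_ops = ops;
}

/* Stand-in for the module's init hook. */
void demo_ops_init(void)
{
	register_demo_ops(&demo_ops_impl);
}

/* Lazily initialize on first use, as get_ntnic_xstats_ops() does. */
struct demo_ops *get_demo_ops(void)
{
	if (registered_ops == NULL)
		demo_ops_init();
	return registered_ops;
}
```

Callers must still check the returned pointer for NULL, since a build may omit the module entirely, which is exactly what the `ntnic_xstats module not included` branches above guard against.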
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8703d478b6..65e7972c68 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -7,6 +7,10 @@
#define __NTNIC_MOD_REG_H__
#include <stdint.h>
+
+#include "rte_ethdev.h"
+#include "rte_flow_driver.h"
+
#include "flow_api.h"
#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
@@ -354,4 +358,28 @@ void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
+struct ntnic_xstats_ops {
+ int (*nthw_xstats_get_names)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size);
+ int (*nthw_xstats_get)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port);
+ void (*nthw_xstats_reset)(nt4ga_stat_t *p_nt4ga_stat, uint8_t port);
+ int (*nthw_xstats_get_names_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size);
+ int (*nthw_xstats_get_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port);
+};
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops);
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void);
+void ntnic_xstats_ops_init(void);
+
#endif /* __NTNIC_MOD_REG_H__ */
diff --git a/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
new file mode 100644
index 0000000000..7604afe6a0
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
@@ -0,0 +1,829 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_ethdev.h>
+
+#include "include/ntdrv_4ga.h"
+#include "ntlog.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "stream_binary_flow_api.h"
+#include "ntnic_mod_reg.h"
+
+struct rte_nthw_xstats_names_s {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint8_t source;
+ unsigned int offset;
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.17
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v1[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) }
+};
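The name tables above pair each xstats name with a source selector and a byte offset into the matching counter struct, so fetching a stat is a base-pointer-plus-offset load. A self-contained sketch of that lookup with a hypothetical counter block (standing in for `port_counters_v2`/`flm_counters_v1`):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical counter block; the real driver uses port_counters_v2 etc. */
struct demo_counters {
	uint64_t pkts;
	uint64_t octets;
	uint64_t drop_events;
};

struct demo_xstats_name {
	const char *name;
	unsigned int offset;	/* byte offset into struct demo_counters */
};

static const struct demo_xstats_name demo_names[] = {
	{ "rx_packets", offsetof(struct demo_counters, pkts) },
	{ "rx_octets", offsetof(struct demo_counters, octets) },
	{ "rx_drop_events", offsetof(struct demo_counters, drop_events) },
};

/* Fetch one stat by table index: base pointer plus the recorded offset. */
static uint64_t demo_xstat_value(const struct demo_counters *c, unsigned int idx)
{
	return *(const uint64_t *)((const char *)c + demo_names[idx].offset);
}
```

This keeps the name tables purely data-driven: adding a counter means adding one struct field and one table row, with no per-stat accessor code.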
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.18
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v2[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * STA 0.9
+ */
+
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v3[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) },
+
+ /* FLM 0.17 */
+ { "flm_count_load_lps", 3, offsetof(struct flm_counters_v1, load_lps) },
+ { "flm_count_load_aps", 3, offsetof(struct flm_counters_v1, load_aps) },
+ { "flm_count_max_lps", 3, offsetof(struct flm_counters_v1, max_lps) },
+ { "flm_count_max_aps", 3, offsetof(struct flm_counters_v1, max_aps) },
+
+ { "rx_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps) },
+ { "rx_max_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps_max) },
+ { "rx_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps) },
+ { "rx_max_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps_max) },
+ { "tx_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps) },
+ { "tx_max_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps_max) },
+ { "tx_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps) },
+ { "tx_max_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps_max) }
+};
+
+#define NTHW_CAP_XSTATS_NAMES_V1 RTE_DIM(nthw_cap_xstats_names_v1)
+#define NTHW_CAP_XSTATS_NAMES_V2 RTE_DIM(nthw_cap_xstats_names_v2)
+#define NTHW_CAP_XSTATS_NAMES_V3 RTE_DIM(nthw_cap_xstats_names_v3)
+
+/*
+ * Container for the reset values
+ */
+#define NTHW_XSTATS_SIZE NTHW_CAP_XSTATS_NAMES_V3
+
+static uint64_t nthw_xstats_reset_val[NUM_ADAPTER_PORTS_MAX][NTHW_XSTATS_SIZE] = { 0 };
+
+/*
+ * These functions must only be called with the stat mutex locked
+ */
+static int nthw_xstats_get(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n && i < nb_names; i++) {
+ stats[i].id = i;
+
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ stats[i].value = *((uint64_t *)&rx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 2:
+ /* TX stat */
+ stats[i].value = *((uint64_t *)&tx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ stats[i].value = *((uint64_t *)&flm_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[0][i];
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ stats[i].value = *((uint64_t *)&pld_ptr[names[i].offset]);
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ default:
+ stats[i].value = 0;
+ break;
+ }
+ }
+
+ return i;
+}
+
+static int nthw_xstats_get_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+ int count = 0;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] < nb_names) {
+ switch (names[ids[i]].source) {
+ case 1:
+ /* RX stat */
+ values[i] = *((uint64_t *)&rx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 2:
+ /* TX stat */
+ values[i] = *((uint64_t *)&tx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ values[i] = *((uint64_t *)&flm_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[0][ids[i]];
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ values[i] = *((uint64_t *)&pld_ptr[names[ids[i]].offset]);
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ default:
+ values[i] = 0;
+ break;
+ }
+
+ count++;
+ }
+ }
+
+ return count;
+}
+
+static void nthw_xstats_reset(nt4ga_stat_t *p_nt4ga_stat, uint8_t port)
+{
+ unsigned int i;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < nb_names; i++) {
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&rx_ptr[names[i].offset]);
+ break;
+
+ case 2:
+ /* TX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&tx_ptr[names[i].offset]);
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ /* Reset makes no sense for flm_count_current */
+ /* Reset can't be used for load_lps, load_aps, max_lps and max_aps */
+ if (flm_ptr &&
+ (strcmp(names[i].name, "flm_count_current") != 0 &&
+ strcmp(names[i].name, "flm_count_load_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_load_aps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_aps") != 0)) {
+ nthw_xstats_reset_val[0][i] =
+ *((uint64_t *)&flm_ptr[names[i].offset]);
+ }
+
+ break;
+
+ case 4:
+ /* Port load stat */
+ /* No reset */
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/*
+ * These functions do not require the stat mutex to be locked
+ */
+static int nthw_xstats_get_names(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size && i < nb_names; i++) {
+ strlcpy(xstats_names[i].name, names[i].name, sizeof(xstats_names[i].name));
+ count++;
+ }
+
+ return count;
+}
+
+static int nthw_xstats_get_names_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] < nb_names) {
+ strlcpy(xstats_names[i].name,
+ names[ids[i]].name,
+ RTE_ETH_XSTATS_NAME_SIZE);
+ }
+
+ count++;
+ }
+
+ return count;
+}
+
+static struct ntnic_xstats_ops ops = {
+ .nthw_xstats_get_names = nthw_xstats_get_names,
+ .nthw_xstats_get = nthw_xstats_get,
+ .nthw_xstats_reset = nthw_xstats_reset,
+ .nthw_xstats_get_names_by_id = nthw_xstats_get_names_by_id,
+ .nthw_xstats_get_by_id = nthw_xstats_get_by_id
+};
+
+void ntnic_xstats_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "xstats module was initialized");
+ register_ntnic_xstats_ops(&ops);
+}
--
2.45.0
* [PATCH v2 62/73] net/ntnic: added flow statistics
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (60 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 61/73] net/ntnic: add xstats Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 63/73] net/ntnic: add scrub registers Serhii Iliushyk
` (11 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
xstats have been extended with flow statistics support.
Additional counters expose learn, unlearn, LPS, APS,
and other FLM events.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 40 ++++
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +-
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 142 ++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.c | 176 ++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 52 ++++++
.../profile_inline/flow_api_profile_inline.c | 46 +++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +++++
drivers/net/ntnic/ntnic_ethdev.c | 132 +++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +
13 files changed, 656 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 3afc5b7853..8fedfdcd04 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -189,6 +189,24 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return -1;
}
+ if (get_flow_filter_ops() != NULL) {
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+ p_nt4ga_stat->flm_stat_ver = ndev->be.flm.ver;
+ p_nt4ga_stat->mp_stat_structs_flm = calloc(1, sizeof(struct flm_counters_v1));
+
+ if (!p_nt4ga_stat->mp_stat_structs_flm) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_APS_MAX, 0);
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_LPS_MAX, 0);
+ }
+
p_nt4ga_stat->mp_port_load =
calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
@@ -236,6 +254,7 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
return -1;
nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
@@ -542,6 +561,27 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
(uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
}
+ /* Update and get FLM stats */
+ flow_filter_ops->flow_get_flm_stats(ndev, (uint64_t *)p_nt4ga_stat->mp_stat_structs_flm,
+ sizeof(struct flm_counters_v1) / sizeof(uint64_t));
+
+ /*
+ * Calculate correct load values:
+ * rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ * bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) - 1ULL);
+ * load_aps = ((uint64_t)load_aps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ * load_lps = ((uint64_t)load_lps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ *
+ * Simplified it gives:
+ *
+ * load_lps = (load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ * load_aps = (load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ */
+
+ p_nt4ga_stat->mp_stat_structs_flm->load_aps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
+ p_nt4ga_stat->mp_stat_structs_flm->load_lps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
return 0;
}
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 17d5755634..9cd9d92823 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 38e4d0ca35..677aa7b6c8 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -17,6 +17,7 @@ typedef struct ntdrv_4ga_s {
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
rte_thread_t stat_thread;
+ rte_thread_t port_event_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e59ac5bdb3..c0b7729929 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -59,6 +59,7 @@ sources = files(
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
+ 'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index e953fc1a12..efe9a1a3b9 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1050,11 +1050,14 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
- (void)ndev;
- (void)data;
- (void)size;
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ return -1;
+
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE)
+ return profile_inline_ops->flow_get_flm_stats_profile_inline(ndev, data, size);
- NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
return -1;
}
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f4c29b8bde..1845f74166 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,148 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_stat_update(be->be_dev, &be->flm);
+}
+
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STAT_LRN_DONE:
+ *value = be->flm.v25.lrn_done->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_IGNORE:
+ *value = be->flm.v25.lrn_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_FAIL:
+ *value = be->flm.v25.lrn_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_DONE:
+ *value = be->flm.v25.unl_done->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_IGNORE:
+ *value = be->flm.v25.unl_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_DONE:
+ *value = be->flm.v25.rel_done->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_IGNORE:
+ *value = be->flm.v25.rel_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_DONE:
+ *value = be->flm.v25.prb_done->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_IGNORE:
+ *value = be->flm.v25.prb_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_DONE:
+ *value = be->flm.v25.aul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_IGNORE:
+ *value = be->flm.v25.aul_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_FAIL:
+ *value = be->flm.v25.aul_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_TUL_DONE:
+ *value = be->flm.v25.tul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_FLOWS:
+ *value = be->flm.v25.flows->cnt;
+ break;
+
+ case HW_FLM_LOAD_LPS:
+ *value = be->flm.v25.load_lps->lps;
+ break;
+
+ case HW_FLM_LOAD_APS:
+ *value = be->flm.v25.load_aps->aps;
+ break;
+
+ default: {
+ if (_VER_ < 18)
+ return UNSUP_FIELD;
+
+ switch (field) {
+ case HW_FLM_STAT_STA_DONE:
+ *value = be->flm.v25.sta_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_DONE:
+ *value = be->flm.v25.inf_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_SKIP:
+ *value = be->flm.v25.inf_skip->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_HIT:
+ *value = be->flm.v25.pck_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_MISS:
+ *value = be->flm.v25.pck_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_UNH:
+ *value = be->flm.v25.pck_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_DIS:
+ *value = be->flm.v25.pck_dis->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_HIT:
+ *value = be->flm.v25.csh_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_MISS:
+ *value = be->flm.v25.csh_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_UNH:
+ *value = be->flm.v25.csh_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_START:
+ *value = be->flm.v25.cuc_start->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_MOVE:
+ *value = be->flm.v25.cuc_move->cnt;
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+ }
+ break;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
new file mode 100644
index 0000000000..98b0e8347a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -0,0 +1,176 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+#include <rte_errno.h>
+
+#include "ntlog.h"
+#include "flm_evt_queue.h"
+
+/* Local queues for flm statistic events */
+static struct rte_ring *info_q_local[MAX_INFO_LCL_QUEUES];
+
+/* Remote queues for flm statistic events */
+static struct rte_ring *info_q_remote[MAX_INFO_RMT_QUEUES];
+
+/* Local queues for flm status records */
+static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
+
+/* Remote queues for flm status records */
+static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+
+
+static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
+{
+ static_assert((FLM_EVT_ELEM_SIZE & ~(size_t)3) == FLM_EVT_ELEM_SIZE,
+ "FLM EVENT struct size");
+ static_assert((FLM_STAT_ELEM_SIZE & ~(size_t)3) == FLM_STAT_ELEM_SIZE,
+ "FLM STAT struct size");
+ char name[20] = "NONE";
+ struct rte_ring *q;
+ uint32_t elem_size = 0;
+ uint32_t queue_size = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port >= MAX_INFO_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_INFO_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port >= MAX_INFO_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_INFO_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port >= MAX_STAT_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_STAT_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port >= MAX_STAT_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_STAT_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue create illegal caller: %u", caller);
+ return NULL;
+ }
+
+ q = rte_ring_create_elem(name,
+ elem_size,
+ queue_size,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN, FILTER, "FLM queues cannot be created due to error %02X", rte_errno);
+ return NULL;
+ }
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ info_q_local[port] = q;
+ break;
+
+ case FLM_INFO_REMOTE:
+ info_q_remote[port] = q;
+ break;
+
+ case FLM_STAT_LOCAL:
+ stat_q_local[port] = q;
+ break;
+
+ case FLM_STAT_REMOTE:
+ stat_q_remote[port] = q;
+ break;
+
+ default:
+ break;
+ }
+
+ return q;
+}
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+ /* If the queue is not created, then ignore and return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES) {
+ if (info_q_local[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_local[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_LOCAL) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES) {
+ if (info_q_remote[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_remote[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_REMOTE) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ return -ENOENT;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
new file mode 100644
index 0000000000..238be7a3b2
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_EVT_QUEUE_H_
+#define _FLM_EVT_QUEUE_H_
+
+#include <stdint.h>
+#include <stdbool.h>
+
+struct flm_status_event_s {
+ void *flow;
+ uint32_t learn_ignore : 1;
+ uint32_t learn_failed : 1;
+ uint32_t learn_done : 1;
+};
+
+struct flm_info_event_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum {
+ FLM_INFO_LOCAL,
+ FLM_INFO_REMOTE,
+ FLM_STAT_LOCAL,
+ FLM_STAT_REMOTE,
+};
+
+/* Max number of local queues */
+#define MAX_INFO_LCL_QUEUES 8
+#define MAX_STAT_LCL_QUEUES 8
+
+/* Max number of remote queues */
+#define MAX_INFO_RMT_QUEUES 128
+#define MAX_STAT_RMT_QUEUES 128
+
+/* queue size */
+#define FLM_EVT_QUEUE_SIZE 8192
+#define FLM_STAT_QUEUE_SIZE 8192
+
+/* Event element size */
+#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
+#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+
+#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index bbf450697c..a1cba7f4c7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4467,6 +4467,48 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
return 0;
}
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ const enum hw_flm_e fields[] = {
+ HW_FLM_STAT_FLOWS, HW_FLM_STAT_LRN_DONE, HW_FLM_STAT_LRN_IGNORE,
+ HW_FLM_STAT_LRN_FAIL, HW_FLM_STAT_UNL_DONE, HW_FLM_STAT_UNL_IGNORE,
+ HW_FLM_STAT_AUL_DONE, HW_FLM_STAT_AUL_IGNORE, HW_FLM_STAT_AUL_FAIL,
+ HW_FLM_STAT_TUL_DONE, HW_FLM_STAT_REL_DONE, HW_FLM_STAT_REL_IGNORE,
+ HW_FLM_STAT_PRB_DONE, HW_FLM_STAT_PRB_IGNORE,
+
+ HW_FLM_STAT_STA_DONE, HW_FLM_STAT_INF_DONE, HW_FLM_STAT_INF_SKIP,
+ HW_FLM_STAT_PCK_HIT, HW_FLM_STAT_PCK_MISS, HW_FLM_STAT_PCK_UNH,
+ HW_FLM_STAT_PCK_DIS, HW_FLM_STAT_CSH_HIT, HW_FLM_STAT_CSH_MISS,
+ HW_FLM_STAT_CSH_UNH, HW_FLM_STAT_CUC_START, HW_FLM_STAT_CUC_MOVE,
+
+ HW_FLM_LOAD_LPS, HW_FLM_LOAD_APS,
+ };
+
+ const uint64_t fields_cnt = sizeof(fields) / sizeof(enum hw_flm_e);
+
+ if (!ndev->flow_mgnt_prepared)
+ return 0;
+
+ if (size < fields_cnt)
+ return -1;
+
+ hw_mod_flm_stat_update(&ndev->be);
+
+ for (uint64_t i = 0; i < fields_cnt; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_stat_get(&ndev->be, fields[i], &value);
+ data[i] = (fields[i] == HW_FLM_STAT_FLOWS || fields[i] == HW_FLM_LOAD_LPS ||
+ fields[i] == HW_FLM_LOAD_APS)
+ ? value
+ : data[i] + value;
+
+ if (ndev->be.flm.ver < 18 && fields[i] == HW_FLM_STAT_PRB_IGNORE)
+ break;
+ }
+
+ return 0;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4483,6 +4525,10 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * Stats
+ */
+ .flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index c695842077..b44d3a7291 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -52,4 +52,10 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+/*
+ * Stats
+ */
+
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/rte_pmd_ntnic.h b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
new file mode 100644
index 0000000000..4a1ba18a5e
--- /dev/null
+++ b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
@@ -0,0 +1,43 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTNIC_EVENT_H_
+#define NTNIC_EVENT_H_
+
+#include <rte_ethdev.h>
+
+typedef struct ntnic_flm_load_s {
+ uint64_t lookup;
+ uint64_t lookup_maximum;
+ uint64_t access;
+ uint64_t access_maximum;
+} ntnic_flm_load_t;
+
+typedef struct ntnic_port_load_s {
+ uint64_t rx_pps;
+ uint64_t rx_pps_maximum;
+ uint64_t tx_pps;
+ uint64_t tx_pps_maximum;
+ uint64_t rx_bps;
+ uint64_t rx_bps_maximum;
+ uint64_t tx_bps;
+ uint64_t tx_bps_maximum;
+} ntnic_port_load_t;
+
+struct ntnic_flm_statistic_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum rte_ntnic_event_type {
+ RTE_NTNIC_FLM_LOAD_EVENT = RTE_ETH_EVENT_MAX,
+ RTE_NTNIC_PORT_LOAD_EVENT,
+ RTE_NTNIC_FLM_STATS_EVENT,
+};
+
+#endif /* NTNIC_EVENT_H_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index f6a74c7df2..9c286a4f35 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,8 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_evt_queue.h"
+#include "rte_pmd_ntnic.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
@@ -1419,6 +1421,7 @@ drv_deinit(struct drv_s *p_drv)
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
+ THREAD_JOIN(p_nt_drv->port_event_thread);
}
/* stop adapter */
@@ -1711,6 +1714,123 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.rss_hash_conf_get = rss_hash_conf_get,
};
+/*
+ * Port event thread
+ */
+THREAD_FUNC port_event_thread_fn(void *context)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)context;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[internals->port_id];
+ uint8_t port_no = internals->port;
+
+ ntnic_flm_load_t flmdata;
+ ntnic_port_load_t portdata;
+
+ memset(&flmdata, 0, sizeof(flmdata));
+ memset(&portdata, 0, sizeof(portdata));
+
+ while (ndev != NULL && ndev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ /*
+ * FLM load measurement
+ * Only send an event if there has been a change
+ */
+ if (p_nt4ga_stat->flm_stat_ver > 22 && p_nt4ga_stat->mp_stat_structs_flm) {
+ if (flmdata.lookup != p_nt4ga_stat->mp_stat_structs_flm->load_lps ||
+ flmdata.access != p_nt4ga_stat->mp_stat_structs_flm->load_aps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ flmdata.lookup = p_nt4ga_stat->mp_stat_structs_flm->load_lps;
+ flmdata.access = p_nt4ga_stat->mp_stat_structs_flm->load_aps;
+ flmdata.lookup_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps;
+ flmdata.access_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_FLM_LOAD_EVENT,
+ &flmdata);
+ }
+ }
+ }
+
+ /*
+ * Port load measurement
+ * Only send an event if there has been a change.
+ */
+ if (p_nt4ga_stat->mp_port_load) {
+ if (portdata.rx_bps != p_nt4ga_stat->mp_port_load[port_no].rx_bps ||
+ portdata.tx_bps != p_nt4ga_stat->mp_port_load[port_no].tx_bps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ portdata.rx_bps = p_nt4ga_stat->mp_port_load[port_no].rx_bps;
+ portdata.tx_bps = p_nt4ga_stat->mp_port_load[port_no].tx_bps;
+ portdata.rx_pps = p_nt4ga_stat->mp_port_load[port_no].rx_pps;
+ portdata.tx_pps = p_nt4ga_stat->mp_port_load[port_no].tx_pps;
+ portdata.rx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_pps_max;
+ portdata.tx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_pps_max;
+ portdata.rx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_bps_max;
+ portdata.tx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_bps_max;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_PORT_LOAD_EVENT,
+ &portdata);
+ }
+ }
+ }
+
+ /* Process events */
+ {
+ int count = 0;
+ bool do_wait = true;
+
+ while (count < 5000) {
+ /* Local FLM statistic events */
+ struct flm_info_event_s data;
+
+ if (flm_inf_queue_get(port_no, FLM_INFO_LOCAL, &data) == 0) {
+ if (eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ struct ntnic_flm_statistic_s event_data;
+ event_data.bytes = data.bytes;
+ event_data.packets = data.packets;
+ event_data.cause = data.cause;
+ event_data.id = data.id;
+ event_data.timestamp = data.timestamp;
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)
+ RTE_NTNIC_FLM_STATS_EVENT,
+ &event_data);
+ do_wait = false;
+ }
+ }
+
+ if (do_wait)
+ nt_os_wait_usec(10);
+
+ count++;
+ do_wait = true;
+ }
+ }
+ }
+
+ return THREAD_RETURN;
+}
+
/*
* Adapter flm stat thread
*/
@@ -2237,6 +2357,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* increase initialized ethernet devices - PF */
p_drv->n_eth_dev_init_count++;
+
+ /* Port event thread */
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ res = THREAD_CTRL_CREATE(&p_nt_drv->port_event_thread, "nt_port_event_thr",
+ port_event_thread_fn, (void *)internals);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
}
return 0;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 65e7972c68..7325bd1ea8 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -290,6 +290,13 @@ struct profile_inline_ops {
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+ /*
+ * Stats
+ */
+ int (*flow_get_flm_stats_profile_inline)(struct flow_nic_dev *ndev,
+ uint64_t *data,
+ uint64_t size);
+
/*
* NT Flow FLM queue API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 63/73] net/ntnic: add scrub registers
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (61 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 62/73] net/ntnic: added flow statistics Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 64/73] net/ntnic: update documentation Serhii Iliushyk
` (10 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add scrub fields to the FPGA map file.
Remove a duplicated macro.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 17 ++++++++++++++++-
drivers/net/ntnic/ntnic_ethdev.c | 3 ---
2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 620968ceb6..f1033ca949 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -728,7 +728,7 @@ static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
{ FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
{ FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
{ FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
- { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 }, { FLM_LRN_DATA_SCRUB_PROF, 4, 712, 0x0000 },
{ FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
{ FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
{ FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
@@ -782,6 +782,18 @@ static nthw_fpga_field_init_s flm_scan_fields[] = {
{ FLM_SCAN_I, 16, 0, 0 },
};
+static nthw_fpga_field_init_s flm_scrub_ctrl_fields[] = {
+ { FLM_SCRUB_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_SCRUB_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scrub_data_fields[] = {
+ { FLM_SCRUB_DATA_DEL, 1, 12, 0 },
+ { FLM_SCRUB_DATA_INF, 1, 13, 0 },
+ { FLM_SCRUB_DATA_R, 4, 8, 0 },
+ { FLM_SCRUB_DATA_T, 8, 0, 0 },
+};
+
static nthw_fpga_field_init_s flm_status_fields[] = {
{ FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
{ FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
@@ -921,6 +933,8 @@ static nthw_fpga_register_init_s flm_registers[] = {
{ FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
{ FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
{ FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_SCRUB_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_scrub_ctrl_fields },
+ { FLM_SCRUB_DATA, 11, 14, NTHW_FPGA_REG_TYPE_WO, 0, 4, flm_scrub_data_fields },
{ FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
{ FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
{ FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
@@ -3058,6 +3072,7 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
+ { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 9c286a4f35..263b3ee7d4 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -47,9 +47,6 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
-/* Max RSS queues */
-#define MAX_QUEUES 125
-
#define NUM_VQ_SEGS(_data_size_) \
({ \
size_t _size = (_data_size_); \
--
2.45.0
* [PATCH v2 64/73] net/ntnic: update documentation
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (62 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 63/73] net/ntnic: add scrub registers Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 65/73] net/ntnic: added flow aged APIs Serhii Iliushyk
` (9 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Update required documentation
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 2c160ae592..e7e1cbcff7 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -40,6 +40,36 @@ Features
- Unicast MAC filter
- Multicast MAC filter
- Promiscuous mode (Enable only. The device always runs in promiscuous mode)
+- Multiple TX and RX queues.
+- Scatter and gather for TX and RX.
+- RSS hash
+- RSS key update
+- RSS based on VLAN or 5-tuple.
+- RSS using different combinations of fields: L3 only, L4 only or both, and
+ source only, destination only or both.
+- Several RSS hash keys, one for each flow type.
+- Default RSS operation with no hash key specification.
+- VLAN filtering.
+- RX VLAN stripping via raw decap.
+- TX VLAN insertion via raw encap.
+- Flow API.
+- Multiple processes.
+- Tunnel types: GTP.
+- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
+ verification.
+- Support for multiple rte_flow groups.
+- Encapsulation and decapsulation of GTP data.
+- Packet modification: NAT, TTL decrement, DSCP tagging
+- Traffic mirroring.
+- Jumbo frame support.
+- Port and queue statistics.
+- RMON statistics in extended stats.
+- Flow metering, including meter policy API.
+- Link state information.
+- CAM and TCAM based matching.
+- Exact match of 140 million flows and policies.
+- Basic stats
+- Extended stats
Limitations
~~~~~~~~~~~
--
2.45.0
* [PATCH v2 65/73] net/ntnic: added flow aged APIs
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (63 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 64/73] net/ntnic: update documentation Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 66/73] net/ntnic: add aged API to the inline profile Serhii Iliushyk
` (8 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Flow aged API was added to the flow_filter_ops.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 71 +++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 88 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++
3 files changed, 182 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index efe9a1a3b9..b101a9462e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1048,6 +1048,70 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+static int flow_get_aged_flows(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline_ops uninitialized");
+ return -1;
+ }
+
+ if (nb_contexts > 0 && !context) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "rte_flow_get_aged_flows - empty context";
+ return -1;
+ }
+
+ return profile_inline_ops->flow_get_aged_flows_profile_inline(dev, caller_id, context,
+ nb_contexts, error);
+}
+
+static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_info;
+ (void)queue_info;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_attr;
+ (void)queue_attr;
+ (void)nb_queue;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
@@ -1076,6 +1140,13 @@ static const struct flow_filter_ops ops = {
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
.flow_get_flm_stats = flow_get_flm_stats,
+ .flow_get_aged_flows = flow_get_aged_flows,
+
+ /*
+ * NT Flow asynchronous operations API
+ */
+ .flow_info_get = flow_info_get,
+ .flow_configure = flow_configure,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index e2fce02afa..9f8670b32d 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -718,6 +718,91 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int eth_flow_get_aged_flows(struct rte_eth_dev *eth_dev,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_get_aged_flows(internals->flw_dev, caller_id, context,
+ nb_contexts, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+/*
+ * NT Flow asynchronous operations API
+ */
+
+static int eth_flow_info_get(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_info_get(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (struct rte_flow_port_info *)port_info,
+ (struct rte_flow_queue_info *)queue_info,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr,
+ uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_configure(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (const struct rte_flow_port_attr *)port_attr,
+ nb_queue,
+ (const struct rte_flow_queue_attr **)queue_attr,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static int poll_statistics(struct pmd_internals *internals)
{
int flow;
@@ -844,6 +929,9 @@ static const struct rte_flow_ops dev_flow_ops = {
.destroy = eth_flow_destroy,
.flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
+ .get_aged_flows = eth_flow_get_aged_flows,
+ .info_get = eth_flow_info_get,
+ .configure = eth_flow_configure,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 7325bd1ea8..a199aff61f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -280,6 +280,12 @@ struct profile_inline_ops {
uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_get_aged_flows_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -348,6 +354,23 @@ struct flow_filter_ops {
struct rte_flow_error *error);
int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+ int (*flow_get_aged_flows)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
+ /*
+ * NT Flow asynchronous operations API
+ */
+ int (*flow_info_get)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
/*
* Other
--
2.45.0
* [PATCH v2 66/73] net/ntnic: add aged API to the inline profile
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (64 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 65/73] net/ntnic: added flow aged APIs Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 67/73] net/ntnic: add info and configure flow API Serhii Iliushyk
` (7 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Added implementation for the flow get aged API.
The module which operates with the age queue was extended with
get, count, and size operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../flow_api/profile_inline/flm_age_queue.c | 49 ++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 24 +++++++++
.../profile_inline/flow_api_profile_inline.c | 51 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 6 +++
5 files changed, 131 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index c0b7729929..8c6d02a5ec 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -58,6 +58,7 @@ sources = files(
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_age_queue.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
new file mode 100644
index 0000000000..f6f04009fe
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -0,0 +1,49 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <rte_ring.h>
+
+#include "ntlog.h"
+#include "flm_age_queue.h"
+
+/* Queues for flm aged events */
+static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+ /* If the queue is not created, then ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue empty");
+
+ return ret;
+ }
+
+ return -ENOENT;
+}
+
+unsigned int flm_age_queue_count(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_count(age_queue[caller_id]);
+
+ return ret;
+}
+
+unsigned int flm_age_queue_get_size(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_get_size(age_queue[caller_id]);
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
new file mode 100644
index 0000000000..d61609cc01
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -0,0 +1,24 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_AGE_QUEUE_H_
+#define _FLM_AGE_QUEUE_H_
+
+#include <stdint.h>
+
+struct flm_age_event_s {
+ void *context;
+};
+
+/* Max number of event queues */
+#define MAX_EVT_AGE_QUEUES 256
+
+#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
+unsigned int flm_age_queue_count(uint16_t caller_id);
+unsigned int flm_age_queue_get_size(uint16_t caller_id);
+
+#endif /* _FLM_AGE_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a1cba7f4c7..f0a8956b04 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -7,6 +7,7 @@
#include "nt_util.h"
#include "hw_mod_backend.h"
+#include "flm_age_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -4395,6 +4396,55 @@ static void dump_flm_data(const uint32_t *data, FILE *file)
}
}
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ unsigned int queue_size = flm_age_queue_get_size(caller_id);
+
+ if (queue_size == 0) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue size is not configured";
+ return -1;
+ }
+
+ unsigned int queue_count = flm_age_queue_count(caller_id);
+
+ if (context == NULL)
+ return queue_count;
+
+ if (queue_count < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue contains fewer records than the expected output";
+ return -1;
+ }
+
+ if (queue_size < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Defined aged queue size is smaller than the expected output";
+ return -1;
+ }
+
+ uint32_t idx;
+
+ for (idx = 0; idx < nb_contexts; ++idx) {
+ struct flm_age_event_s obj;
+ int ret = flm_age_queue_get(caller_id, &obj);
+
+ if (ret != 0)
+ break;
+
+ context[idx] = obj.context;
+ }
+
+ return idx;
+}
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -4523,6 +4573,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_get_aged_flows_profile_inline = flow_get_aged_flows_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b44d3a7291..e1934bc6a6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -48,6 +48,12 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
FILE *file,
struct rte_flow_error *error);
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 67/73] net/ntnic: add info and configure flow API
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (65 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 66/73] net/ntnic: add aged API to the inline profile Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 68/73] net/ntnic: add aged flow event Serhii Iliushyk
` (6 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The inline profile was extended with the flow info and configure APIs.
The module that operates the age queue was extended with
create and free operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
v2
* Fix usage of the rte_atomic
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 19 +----
.../flow_api/profile_inline/flm_age_queue.c | 79 +++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 5 ++
.../profile_inline/flow_api_profile_inline.c | 59 ++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 9 +++
drivers/net/ntnic/ntnic_mod_reg.h | 9 +++
7 files changed, 168 insertions(+), 15 deletions(-)
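
The `flm_age_queue_create()` added below rejects element counts that are not a power of two or that exceed the ring size mask, before handing the count to `rte_ring_create_elem()`. The same validation can be sketched in plain C (the `0x7FFFFFFF` constant mirrors DPDK's `RTE_RING_SZ_MASK`, but treat it as an assumption of this sketch, as are the function names):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for rte_is_power_of_2(): true for 1, 2, 4, 8, ... */
static bool is_power_of_2(uint32_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Mirror of the driver's size check on the aged-event queue;
 * 0x7FFFFFFF is assumed here to match DPDK's RTE_RING_SZ_MASK. */
static bool age_queue_size_ok(uint32_t count)
{
	const uint32_t ring_sz_mask = 0x7FFFFFFF;

	return is_power_of_2(count) && count <= ring_sz_mask;
}
```

A caller of `rte_flow_configure()` therefore needs to pick `nb_aging_objects` as a power of two, or the queue creation in this driver will fail with the warning logged below.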
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index ed96f77bc0..89f071d982 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -77,6 +77,9 @@ struct flow_eth_dev {
/* QSL_HSH index if RSS needed QSL v6+ */
int rss_target_id;
+ /* The size of buffer for aged out flow list */
+ uint32_t nb_aging_objects;
+
struct flow_eth_dev *next;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index b101a9462e..5349dc84ab 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1075,12 +1075,6 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_info;
- (void)queue_info;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1088,20 +1082,14 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_info_get_profile_inline(dev, caller_id, port_info,
+ queue_info, error);
}
static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_attr;
- (void)queue_attr;
- (void)nb_queue;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1109,7 +1097,8 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_configure_profile_inline(dev, caller_id, port_attr,
+ nb_queue, queue_attr, error);
}
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index f6f04009fe..cdc7223d51 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -4,12 +4,91 @@
*/
#include <rte_ring.h>
+#include <rte_errno.h>
+#include <rte_stdatomic.h>
+#include <stdint.h>
#include "ntlog.h"
#include "flm_age_queue.h"
/* Queues for flm aged events */
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+
+void flm_age_queue_free(uint8_t port, uint16_t caller_id)
+{
+ struct rte_ring *q = NULL;
+
+ if (port < MAX_EVT_AGE_PORTS)
+ rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ q = age_queue[caller_id];
+ age_queue[caller_id] = NULL;
+ }
+
+ if (q != NULL)
+ rte_ring_free(q);
+}
+
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
+{
+ char name[20];
+ struct rte_ring *q = NULL;
+
+ if (rte_is_power_of_2(count) == false || count > RTE_RING_SZ_MASK) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue number of elements (%u) is invalid, must be power of 2, and not exceed %u",
+ count,
+ RTE_RING_SZ_MASK);
+ return NULL;
+ }
+
+ if (port >= MAX_EVT_AGE_PORTS) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_EVT_AGE_PORTS - 1);
+ return NULL;
+ }
+
+ rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
+
+ if (caller_id >= MAX_EVT_AGE_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for caller_id %u. Max supported caller_id is %u",
+ caller_id,
+ MAX_EVT_AGE_QUEUES - 1);
+ return NULL;
+ }
+
+ if (age_queue[caller_id] != NULL) {
+ NT_LOG(DBG, FILTER, "FLM aged event queue %u already created", caller_id);
+ return age_queue[caller_id];
+ }
+
+ snprintf(name, 20, "AGE_EVENT%u", caller_id);
+ q = rte_ring_create_elem(name,
+ FLM_AGE_ELEM_SIZE,
+ count,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created due to error %02X",
+ rte_errno);
+ return NULL;
+ }
+
+ age_queue[caller_id] = q;
+
+ return q;
+}
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index d61609cc01..9ff6ef6de0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -15,8 +15,13 @@ struct flm_age_event_s {
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
+/* Max number of event ports */
+#define MAX_EVT_AGE_PORTS 128
+
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f0a8956b04..7efdb76600 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4559,6 +4559,63 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
return 0;
}
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ (void)queue_info;
+ (void)caller_id;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+ memset(port_info, 0, sizeof(struct rte_flow_port_info));
+
+ port_info->max_nb_aging_objects = dev->nb_aging_objects;
+
+ return res;
+}
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ (void)nb_queue;
+ (void)queue_attr;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (port_attr->nb_aging_objects > 0) {
+ if (dev->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ struct rte_ring *age_queue =
+ flm_age_queue_create(dev->port_id, caller_id, port_attr->nb_aging_objects);
+
+ if (age_queue == NULL) {
+ error->message = "Failed to allocate aging objects";
+ goto error_out;
+ }
+
+ dev->nb_aging_objects = port_attr->nb_aging_objects;
+ }
+
+ return res;
+
+error_out:
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+
+ if (port_attr->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ return -1;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4580,6 +4637,8 @@ static const struct profile_inline_ops ops = {
* Stats
*/
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
+ .flow_info_get_profile_inline = flow_info_get_profile_inline,
+ .flow_configure_profile_inline = flow_configure_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e1934bc6a6..ea1d9c31b2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -64,4 +64,13 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error);
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index a199aff61f..029b0ac4eb 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -309,6 +309,15 @@ struct profile_inline_ops {
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
uint32_t (*flm_update)(struct flow_eth_dev *dev);
+
+ int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 68/73] net/ntnic: add aged flow event
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (66 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 67/73] net/ntnic: add info and configure flow API Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 69/73] net/ntnic: add thread termination Serhii Iliushyk
` (5 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The port thread was extended with a new age event callback handler.
Getters and setters for the LRN, INF, and STA registers were added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 7 +
.../net/ntnic/nthw/flow_api/flow_id_table.c | 16 +++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 75 +++++++++++
.../flow_api/profile_inline/flm_age_queue.c | 28 ++++
.../flow_api/profile_inline/flm_age_queue.h | 12 ++
.../flow_api/profile_inline/flm_evt_queue.c | 20 +++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 121 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 16 +++
10 files changed, 299 insertions(+)
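
The aged-flow notification in this patch is a per-port atomic flag: the FLM update path sets it when it enqueues an age event, and the port event thread tests it, fires `RTE_ETH_EVENT_FLOW_AGED`, and clears it. The flag discipline can be shown with a self-contained C11 `stdatomic` stand-in (the `toy_*` names and port count are invented; the driver uses DPDK's `rte_stdatomic` wrappers instead):

```c
#include <stdatomic.h>
#include <stdint.h>

#define TOY_MAX_PORTS 4

/* Per-port "aged flows pending" flags, mirroring age_event[] */
static atomic_uint_fast16_t toy_age_event[TOY_MAX_PORTS];

/* Producer side (FLM update path): mark that this port has aged flows */
static void toy_age_event_set(uint8_t port)
{
	atomic_store_explicit(&toy_age_event[port], 1, memory_order_seq_cst);
}

/* Consumer side (port event thread): test whether an event is pending */
static int toy_age_event_get(uint8_t port)
{
	return (int)atomic_load_explicit(&toy_age_event[port],
					 memory_order_seq_cst);
}

/* Consumer side: acknowledge the event after the callback has run */
static void toy_age_event_clear(uint8_t port)
{
	atomic_store_explicit(&toy_age_event[port], 0, memory_order_seq_cst);
}
```

Because the flag only records "something aged on this port" rather than a count, a single `RTE_ETH_EVENT_FLOW_AGED` callback can cover any number of queued events, and the application then drains them via `rte_flow_get_aged_flows()`.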
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 9cd9d92823..92e1205640 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be);
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
@@ -695,6 +698,10 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
uint32_t *sta_word_cnt);
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 5635ac4524..a3f5e1d7f7 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -129,3 +129,19 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
pthread_mutex_unlock(&handle->mtx);
}
+
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, id);
+
+ *caller_id = element->caller_id;
+ *type = element->type;
+ memcpy(flm_h, &element->handle, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index e190fe4a11..edb4f42729 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -20,4 +20,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
uint8_t type);
void ntnic_id_table_free_id(void *id_table, uint32_t id);
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 1845f74166..996abfb28d 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,52 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_buf_ctrl_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_buf_ctrl_mod_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value)
+{
+ int get = 1; /* Only get supported */
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_BUF_CTRL_LRN_FREE:
+ GET_SET(be->flm.v25.buf_ctrl->lrn_free, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_INF_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->inf_avail, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_STA_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->sta_avail, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_buf_ctrl_mod_get(be, field, value);
+}
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
{
return be->iface->flm_stat_update(be->be_dev, &be->flm);
@@ -887,3 +933,32 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
return ret;
}
+
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_INF_STA_DATA:
+ be->iface->flm_inf_sta_data_update(be->be_dev, &be->flm, inf_value,
+ inf_size, inf_word_cnt, sta_value,
+ sta_size, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index cdc7223d51..51126dfded 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -15,6 +15,21 @@
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+__rte_always_inline int flm_age_event_get(uint8_t port)
+{
+ return rte_atomic_load_explicit(&age_event[port], rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_set(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 1, rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_clear(uint8_t port)
+{
+ rte_atomic_flag_clear_explicit(&age_event[port], rte_memory_order_seq_cst);
+}
+
void flm_age_queue_free(uint8_t port, uint16_t caller_id)
{
struct rte_ring *q = NULL;
@@ -90,6 +105,19 @@ struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned
return q;
}
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+ /* If the queue is not created, then ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue full");
+ }
+}
+
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 9ff6ef6de0..27154836c5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -12,6 +12,14 @@ struct flm_age_event_s {
void *context;
};
+/* Indicates why the flow info record was generated */
+#define INF_DATA_CAUSE_SW_UNLEARN 0
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED 1
+#define INF_DATA_CAUSE_NA 2
+#define INF_DATA_CAUSE_PERIODIC_FLOW_INFO 3
+#define INF_DATA_CAUSE_SW_PROBE 4
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT 5
+
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
@@ -20,8 +28,12 @@ struct flm_age_event_s {
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+int flm_age_event_get(uint8_t port);
+void flm_age_event_set(uint8_t port);
+void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 98b0e8347a..db9687714f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -138,6 +138,26 @@ static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
return q;
}
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
+{
+ struct rte_ring **stat_q = remote ? stat_q_remote : stat_q_local;
+
+ if (port >= (remote ? MAX_STAT_RMT_QUEUES : MAX_STAT_LCL_QUEUES))
+ return -1;
+
+ if (stat_q[port] == NULL) {
+ if (flm_evt_queue_create(port, remote ? FLM_STAT_REMOTE : FLM_STAT_LOCAL) == NULL)
+ return -1;
+ }
+
+ if (rte_ring_sp_enqueue_elem(stat_q[port], obj, FLM_STAT_ELEM_SIZE) != 0) {
+ NT_LOG(DBG, FILTER, "FLM local status queue full");
+ return -1;
+ }
+
+ return 0;
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 238be7a3b2..3a61f844b6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,5 +48,6 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7efdb76600..21d8ed4ca9 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,7 @@
#include "hw_mod_backend.h"
#include "flm_age_queue.h"
+#include "flm_evt_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -20,6 +21,13 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define DMA_BLOCK_SIZE 256
+#define DMA_OVERHEAD 20
+#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
+#define MAX_STA_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_STA_DATA)
+#define WORDS_PER_INF_DATA (sizeof(struct flm_v25_inf_data_s) / sizeof(uint32_t))
+#define MAX_INF_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_INF_DATA)
+
#define NT_FLM_MISS_FLOW_TYPE 0
#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
@@ -71,14 +79,127 @@ static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
return r.num;
}
+static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
+{
+ if (caller_id < MAX_VDPA_PORTS + 1) {
+ *port = caller_id;
+ return true;
+ }
+
+ *port = caller_id - MAX_VDPA_PORTS - 1;
+ return false;
+}
+
+static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_inf_data_s *inf_data =
+ (struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, inf_data->id, &flm_h, &caller_id,
+ &type);
+
+ /* Check that received record hold valid meter statistics */
+ if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
+
+ age_event.context = fh->context;
+
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
+ break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
+ }
+ }
+ }
+}
+
+static void flm_mtr_read_sta_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_sta_data_s *sta_data =
+ (struct flm_v25_sta_data_s *)&data[i * WORDS_PER_STA_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, sta_data->id, &flm_h, &caller_id,
+ &type);
+
+ if (type == 1) {
+ uint8_t port;
+ bool remote_caller = is_remote_caller(caller_id, &port);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+ ((struct flow_handle *)flm_h.p)->learn_ignored = 1;
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ struct flm_status_event_s data = {
+ .flow = flm_h.p,
+ .learn_ignore = sta_data->lis,
+ .learn_failed = sta_data->lfs,
+ };
+
+ flm_sta_queue_put(port, remote_caller, &data);
+ }
+ }
+}
+
static uint32_t flm_update(struct flow_eth_dev *dev)
{
static uint32_t inf_word_cnt;
static uint32_t sta_word_cnt;
+ uint32_t inf_data[DMA_BLOCK_SIZE];
+ uint32_t sta_data[DMA_BLOCK_SIZE];
+
+ if (inf_word_cnt >= WORDS_PER_INF_DATA || sta_word_cnt >= WORDS_PER_STA_DATA) {
+ uint32_t inf_records = inf_word_cnt / WORDS_PER_INF_DATA;
+
+ if (inf_records > MAX_INF_DATA_RECORDS_PER_READ)
+ inf_records = MAX_INF_DATA_RECORDS_PER_READ;
+
+ uint32_t sta_records = sta_word_cnt / WORDS_PER_STA_DATA;
+
+ if (sta_records > MAX_STA_DATA_RECORDS_PER_READ)
+ sta_records = MAX_STA_DATA_RECORDS_PER_READ;
+
+ hw_mod_flm_inf_sta_data_update_get(&dev->ndev->be, HW_FLM_FLOW_INF_STA_DATA,
+ inf_data, inf_records * WORDS_PER_INF_DATA,
+ &inf_word_cnt, sta_data,
+ sta_records * WORDS_PER_STA_DATA,
+ &sta_word_cnt);
+
+ if (inf_records > 0)
+ flm_mtr_read_inf_records(dev, inf_data, inf_records);
+
+ if (sta_records > 0)
+ flm_mtr_read_sta_records(dev, sta_data, sta_records);
+
+ return 1;
+ }
+
if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
return 1;
+ hw_mod_flm_buf_ctrl_update(&dev->ndev->be);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_INF_AVAIL, &inf_word_cnt);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_STA_AVAIL, &sta_word_cnt);
+
return inf_word_cnt + sta_word_cnt;
}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 263b3ee7d4..6cac8da17e 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,7 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_age_queue.h"
#include "profile_inline/flm_evt_queue.h"
#include "rte_pmd_ntnic.h"
@@ -1816,6 +1817,21 @@ THREAD_FUNC port_event_thread_fn(void *context)
}
}
+ /* AGED event */
+ /* Note: RTE_FLOW_PORT_FLAG_STRICT_QUEUE flag is not supported so
+ * event is always generated
+ */
+ int aged_event_count = flm_age_event_get(port_no);
+
+ if (aged_event_count > 0 && eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_FLOW_AGED,
+ NULL);
+ flm_age_event_clear(port_no);
+ do_wait = false;
+ }
+
if (do_wait)
nt_os_wait_usec(10);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 69/73] net/ntnic: add thread termination
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (67 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 68/73] net/ntnic: add aged flow event Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 70/73] net/ntnic: add age documentation Serhii Iliushyk
` (4 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Introduce clear_pdrv to unregister the driver
from global tracking.
Modify drv_deinit to call clear_pdrv and ensure
safe termination.
Add freeing of the FLM status and age event queues.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../flow_api/profile_inline/flm_age_queue.c | 10 +++
.../flow_api/profile_inline/flm_age_queue.h | 1 +
.../flow_api/profile_inline/flm_evt_queue.c | 76 +++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 12 +++
5 files changed, 100 insertions(+)
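
Both `flm_age_queue_free()` and the new `flm_inf_sta_queue_free()` below follow the same detach-then-free pattern: the table slot is cleared before the ring is released, so a repeated call for the same id is a harmless no-op during teardown. A minimal self-contained sketch of that pattern (plain C with `malloc`/`free` standing in for `rte_ring_free`; the `toy_*` names are invented):

```c
#include <stdlib.h>
#include <stddef.h>

#define TOY_MAX_QUEUES 8

static int *toy_queue[TOY_MAX_QUEUES];

/* Mirror of the detach-then-free pattern: clear the table slot first,
 * then release the detached pointer, so a second call for the same id
 * finds NULL and does nothing (no double free). */
static void toy_queue_free(unsigned int id)
{
	int *q = NULL;

	if (id < TOY_MAX_QUEUES && toy_queue[id] != NULL) {
		q = toy_queue[id];
		toy_queue[id] = NULL;
	}

	free(q);	/* free(NULL) is defined to be a no-op */
}
```

This is what makes `flm_age_queue_free_all()` safe to call unconditionally from `drv_deinit`, even for port/caller combinations that never had a queue created.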
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index 51126dfded..f4b071ebbb 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -46,6 +46,16 @@ void flm_age_queue_free(uint8_t port, uint16_t caller_id)
rte_ring_free(q);
}
+void flm_age_queue_free_all(void)
+{
+ int i;
+ int j;
+
+ for (i = 0; i < MAX_EVT_AGE_PORTS; i++)
+ for (j = 0; j < MAX_EVT_AGE_QUEUES; j++)
+ flm_age_queue_free(i, j);
+}
+
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
{
char name[20];
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 27154836c5..55c410ac86 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -32,6 +32,7 @@ int flm_age_event_get(uint8_t port);
void flm_age_event_set(uint8_t port);
void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+void flm_age_queue_free_all(void);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index db9687714f..761609a0ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -25,6 +25,82 @@ static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
/* Remote queues for flm status records */
static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+static void flm_inf_sta_queue_free(uint8_t port, uint8_t caller)
+{
+ struct rte_ring *q = NULL;
+
+ /* If the queue is not created, then ignore and return */
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ q = info_q_local[port];
+ info_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ q = info_q_remote[port];
+ info_q_remote[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port < MAX_STAT_LCL_QUEUES && stat_q_local[port] != NULL) {
+ q = stat_q_local[port];
+ stat_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port < MAX_STAT_RMT_QUEUES && stat_q_remote[port] != NULL) {
+ q = stat_q_remote[port];
+ stat_q_remote[port] = NULL;
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ break;
+ }
+
+ if (q)
+ rte_ring_free(q);
+}
+
+void flm_inf_sta_queue_free_all(uint8_t caller)
+{
+ int count = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ count = MAX_INFO_LCL_QUEUES;
+ break;
+
+ case FLM_INFO_REMOTE:
+ count = MAX_INFO_RMT_QUEUES;
+ break;
+
+ case FLM_STAT_LOCAL:
+ count = MAX_STAT_LCL_QUEUES;
+ break;
+
+ case FLM_STAT_REMOTE:
+ count = MAX_STAT_RMT_QUEUES;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ return;
+ }
+
+ for (int i = 0; i < count; i++)
+ flm_inf_sta_queue_free(i, caller);
+}
static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 3a61f844b6..d61b282472 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -47,6 +47,7 @@ enum {
#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+void flm_inf_sta_queue_free_all(uint8_t caller);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 6cac8da17e..15374d3045 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1416,6 +1416,18 @@ drv_deinit(struct drv_s *p_drv)
p_drv->ntdrv.b_shutdown = true;
THREAD_JOIN(p_nt_drv->stat_thread);
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ THREAD_JOIN(p_nt_drv->flm_thread);
+ profile_inline_ops->flm_free_queues();
+ THREAD_JOIN(p_nt_drv->port_event_thread);
+ /* Free all local flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_LOCAL);
+ /* Free all remote flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_REMOTE);
+ /* Free all aged flow event queues */
+ flm_age_queue_free_all();
+ }
+
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
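The teardown patch above frees every FLM event ring during driver deinit. Its per-queue helpers follow a claim-then-free pattern: the ring pointer is removed from the table before being released, so a repeated free of the same slot becomes a harmless no-op instead of a double free. A minimal stand-alone sketch of that pattern follows; `fake_ring_free()` is a hypothetical stand-in for `rte_ring_free()` so the sketch runs without DPDK.

```c
#include <assert.h>
#include <stddef.h>

#define NUM_QUEUES 4

static void *queues[NUM_QUEUES];
static int free_count;

/* Hypothetical stand-in for rte_ring_free(): just counts releases. */
static void fake_ring_free(void *q)
{
	(void)q;
	free_count++;
}

/* Claim-then-free, as in the patch's flm_inf_sta_queue_free():
 * the pointer is taken out of the table before it is freed, so a
 * second call on the same slot sees NULL and does nothing. */
static void queue_free(unsigned int idx)
{
	void *q = NULL;

	if (idx < NUM_QUEUES && queues[idx] != NULL) {
		q = queues[idx];
		queues[idx] = NULL;
	}

	if (q)
		fake_ring_free(q);
}

static void queue_free_all(void)
{
	for (unsigned int i = 0; i < NUM_QUEUES; i++)
		queue_free(i);
}
```

Calling `queue_free_all()` twice releases each created queue exactly once, which is what makes the repeated free calls in the deinit path safe.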
* [PATCH v2 70/73] net/ntnic: add age documentation
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (68 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 69/73] net/ntnic: add thread termination Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 71/73] net/ntnic: add meter API Serhii Iliushyk
` (3 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The ntnic.rst document was extended with the age feature specification.
ntnic.ini was extended with rte_flow action age support.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 18 ++++++++++++++++++
doc/guides/rel_notes/release_24_11.rst | 15 +++++++++------
3 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 947c7ba3a1..af2981ccf6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -33,6 +33,7 @@ udp = Y
vlan = Y
[rte_flow actions]
+age = Y
drop = Y
jump = Y
mark = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e7e1cbcff7..e5a8d71892 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -148,3 +148,21 @@ FILTER
To enable logging on all levels use wildcard in the following way::
--log-level=pmd.net.ntnic.*,8
+
+Flow Scanner
+------------
+
+Flow Scanner is a DPDK mechanism that periodically scans the RTE flow tables to check for aged-out flows.
+When a flow's timeout is reached, i.e. no packets were matched by the flow within the timeout period,
+the ``RTE_ETH_EVENT_FLOW_AGED`` event is reported and the flow is marked as aged-out.
+
+Therefore, the flow scanner functionality is closely connected to the ``age`` action of RTE flows.
+
+The ``age timeout`` action has the following characteristics:
+ - functions only in group > 0;
+ - flow timeout is specified in seconds;
+ - flow scanner checks flow age timeouts once every 1-480 seconds; therefore, flows may not age out immediately, depending on the length of the flow scanner check interval;
+ - aging counters can display a maximum of **n - 1** aged flows when aging counters are set to **n**;
+ - overall, 15 different timeouts can be specified for the flows at the same time (this limit is combined across all actions; the maximum of 15 distinct timeouts can be reached only across different groups, e.g. when 5 flows with different timeouts are created per group; within a single group the limit is 14 distinct timeouts);
+ - after a flow is aged out, it is not automatically deleted;
+ - an aged-out flow can be updated with the ``flow update`` command, and its aged-out status will be reverted;
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index fa4822d928..5be9660287 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -154,12 +154,15 @@ New Features
* **Updated Napatech ntnic net driver [EXPERIMENTAL].**
- * Updated supported version of the FPGA to 9563.55.49.
- * Extended and fixed logging.
- * Added NT flow filter initialization.
- * Added NT flow backend initialization.
- * Added initialization of FPGA modules related to flow HW offload.
- * Added basic handling of the virtual queues.
+ * Updated the supported version of the FPGA to 9563.55.49
+ * Fixed Coverity issues
+ * Fixed issues related to release 24.07
+ * Extended and fixed the implementation of the logging
+ * Added NT flow filter init API
+ * Added NT flow backend initialization API
+ * Added initialization of FPGA modules related to flow HW offload
+ * Added basic handling of the virtual queues
+ * Added age rte flow action support
* **Added cryptodev queue pair reset support.**
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
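One item in the documentation above is worth unpacking: aging counters report at most n - 1 aged flows when set to n because aged-flow events travel through DPDK rings, and an `rte_ring` created with size n (without the `RING_F_EXACT_SZ` flag) stores at most n - 1 elements; one slot is always kept empty to distinguish a full ring from an empty one. A minimal single-threaded sketch of that full/empty convention (not the driver's actual queue code) follows.

```c
#include <assert.h>

#define RING_SIZE 8	/* power of two, like a default rte_ring "count" */

struct ring {
	unsigned int head;	/* producer: next slot to write */
	unsigned int tail;	/* consumer: next slot to read  */
	int slots[RING_SIZE];
};

/* Full when advancing head would land on tail: one slot is always
 * kept empty, so usable capacity is RING_SIZE - 1, not RING_SIZE. */
static int ring_enqueue(struct ring *r, int v)
{
	unsigned int next = (r->head + 1) & (RING_SIZE - 1);

	if (next == r->tail)
		return -1;	/* "full" with RING_SIZE - 1 entries */

	r->slots[r->head] = v;
	r->head = next;
	return 0;
}

static int ring_dequeue(struct ring *r, int *v)
{
	if (r->tail == r->head)
		return -1;	/* empty */

	*v = r->slots[r->tail];
	r->tail = (r->tail + 1) & (RING_SIZE - 1);
	return 0;
}
```

With `RING_SIZE` of 8, the eighth enqueue fails even though all earlier slots were drained from an empty start, which mirrors the documented n - 1 aging-counter limit.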
* [PATCH v2 71/73] net/ntnic: add meter API
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (69 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 70/73] net/ntnic: add age documentation Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 72/73] net/ntnic: add meter module Serhii Iliushyk
` (2 subsequent siblings)
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add meter API and implementation to the profile inline.
Management functions were extended with meter flow support.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 +
.../flow_api/profile_inline/flm_evt_queue.c | 21 +
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 534 +++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 34 ++
6 files changed, 578 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 89f071d982..032063712a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -100,6 +100,7 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *flm_mtr_handle;
void *group_handle;
void *hw_db_handle;
void *id_table_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 155a9e1fd6..8f1a6419f3 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -57,6 +57,7 @@ enum res_type_e {
#define MAX_TCAM_START_OFFSETS 4
+#define MAX_FLM_MTRS_SUPPORTED 4
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
@@ -215,6 +216,8 @@ struct nic_flow_def {
uint32_t jump_to_group;
+ uint32_t mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
int full_offload;
/*
@@ -307,6 +310,8 @@ struct flow_handle {
uint32_t flm_db_idx_counter;
uint32_t flm_db_idxs[RES_COUNT];
+ uint32_t flm_mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
uint32_t flm_data[10];
uint8_t flm_prot;
uint8_t flm_kid;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 761609a0ea..d76c7da568 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -234,6 +234,27 @@ int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
return 0;
}
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+ /* If the queues are not created, then ignore and return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_local[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM local info queue full");
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_remote[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM remote info queue full");
+ }
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index d61b282472..ee8175cf25 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,6 +48,7 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
void flm_inf_sta_queue_free_all(uint8_t caller);
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 21d8ed4ca9..2c55a7c9c2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define FLM_MTR_PROFILE_SIZE 0x100000
+#define FLM_MTR_STAT_SIZE 0x1000000
+#define UINT64_MSB ((uint64_t)1 << 63)
+
#define DMA_BLOCK_SIZE 256
#define DMA_OVERHEAD 20
#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
@@ -46,8 +50,336 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
+#define POLICING_PARAMETER_OFFSET 4096
+#define SIZE_CONVERTER 1099.511627776
+
+struct flm_mtr_stat_s {
+ struct dual_buckets_s *buckets;
+ atomic_uint_fast64_t n_pkt;
+ atomic_uint_fast64_t n_bytes;
+ uint64_t n_pkt_base;
+ uint64_t n_bytes_base;
+ atomic_uint_fast64_t stats_mask;
+ uint32_t flm_id;
+};
+
+struct flm_mtr_shared_stats_s {
+ struct flm_mtr_stat_s *stats;
+ uint32_t size;
+ int shared;
+};
+
+struct flm_flow_mtr_handle_s {
+ struct dual_buckets_s {
+ uint16_t rate_a;
+ uint16_t rate_b;
+ uint16_t size_a;
+ uint16_t size_b;
+ } dual_buckets[FLM_MTR_PROFILE_SIZE];
+
+ struct flm_mtr_shared_stats_s *port_stats[UINT8_MAX];
+};
+
static void *flm_lrn_queue_arr;
+static int flow_mtr_supported(struct flow_eth_dev *dev)
+{
+ return hw_mod_flm_present(&dev->ndev->be) && dev->ndev->be.flm.nb_variant == 2;
+}
+
+static uint64_t flow_mtr_meter_policy_n_max(void)
+{
+ return FLM_MTR_PROFILE_SIZE;
+}
+
+static inline uint64_t convert_policing_parameter(uint64_t value)
+{
+ uint64_t limit = POLICING_PARAMETER_OFFSET;
+ uint64_t shift = 0;
+ uint64_t res = value;
+
+ while (shift < 15 && value >= limit) {
+ limit <<= 1;
+ ++shift;
+ }
+
+ if (shift != 0) {
+ uint64_t tmp = POLICING_PARAMETER_OFFSET * (1 << (shift - 1));
+
+ if (tmp > value) {
+ res = 0;
+
+ } else {
+ tmp = value - tmp;
+ res = tmp >> (shift - 1);
+ }
+
+ if (res >= POLICING_PARAMETER_OFFSET)
+ res = POLICING_PARAMETER_OFFSET - 1;
+
+ res = res | (shift << 12);
+ }
+
+ return res;
+}
+
+static int flow_mtr_set_profile(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a, uint64_t bucket_rate_b,
+ uint64_t bucket_size_b)
+{
+ struct flow_nic_dev *ndev = dev->ndev;
+ struct flm_flow_mtr_handle_s *handle =
+ (struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle;
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ /* Round rates up to nearest 128 bytes/sec and shift to 128 bytes/sec units */
+ bucket_rate_a = (bucket_rate_a + 127) >> 7;
+ bucket_rate_b = (bucket_rate_b + 127) >> 7;
+
+ buckets->rate_a = convert_policing_parameter(bucket_rate_a);
+ buckets->rate_b = convert_policing_parameter(bucket_rate_b);
+
+ /* Round size down to 38-bit int */
+ if (bucket_size_a > 0x3fffffffff)
+ bucket_size_a = 0x3fffffffff;
+
+ if (bucket_size_b > 0x3fffffffff)
+ bucket_size_b = 0x3fffffffff;
+
+ /* Convert size to units of 2^40 / 10^9. Output is a 28-bit int. */
+ bucket_size_a = bucket_size_a / SIZE_CONVERTER;
+ bucket_size_b = bucket_size_b / SIZE_CONVERTER;
+
+ buckets->size_a = convert_policing_parameter(bucket_size_a);
+ buckets->size_b = convert_policing_parameter(bucket_size_b);
+
+ return 0;
+}
+
+static int flow_mtr_set_policy(struct flow_eth_dev *dev, uint32_t policy_id, int drop)
+{
+ (void)dev;
+ (void)policy_id;
+ (void)drop;
+ return 0;
+}
+
+static uint32_t flow_mtr_meters_supported(struct flow_eth_dev *dev, uint8_t caller_id)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ return handle->port_stats[caller_id]->size;
+}
+
+static int flow_mtr_create_meter(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t mtr_id,
+ uint32_t profile_id,
+ uint32_t policy_id,
+ uint64_t stats_mask)
+{
+ (void)policy_id;
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ union flm_handles flm_h;
+ flm_h.idx = mtr_id;
+ uint32_t flm_id = ntnic_id_table_get_id(dev->ndev->id_table_handle, flm_h, caller_id, 2);
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = buckets->rate_a;
+ learn_record->size = buckets->size_a;
+ learn_record->fill = buckets->size_a;
+
+ learn_record->ft_mbr =
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE; /* FT to assign if MBR has been exceeded */
+
+ learn_record->ent = 1;
+ learn_record->op = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ if (stats_mask)
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ mtr_stat[mtr_id].buckets = buckets;
+ mtr_stat[mtr_id].flm_id = flm_id;
+ atomic_store(&mtr_stat[mtr_id].stats_mask, stats_mask);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 3;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 0;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ /* Clear statistics so stats_mask prevents updates of counters on deleted meters */
+ atomic_store(&mtr_stat[mtr_id].stats_mask, 0);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, 0);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, 0);
+ mtr_stat[mtr_id].n_bytes_base = 0;
+ mtr_stat[mtr_id].n_pkt_base = 0;
+ mtr_stat[mtr_id].buckets = NULL;
+
+ ntnic_id_table_free_id(dev->ndev->id_table_handle, flm_id);
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = &handle->port_stats[caller_id]->stats[mtr_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = mtr_stat->flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = mtr_stat->buckets->rate_a;
+ learn_record->size = mtr_stat->buckets->size_a;
+ learn_record->adj = adjust_value;
+
+ learn_record->ft_mbr = NT_FLM_VIOLATING_MBR_FLOW_TYPE;
+
+ learn_record->ent = 1;
+ learn_record->op = 2;
+ learn_record->eor = 1;
+
+ if (atomic_load(&mtr_stat->stats_mask))
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
static void flm_setup_queues(void)
{
flm_lrn_queue_arr = flm_lrn_queue_create();
@@ -92,6 +424,8 @@ static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
for (uint32_t i = 0; i < records; ++i) {
struct flm_v25_inf_data_s *inf_data =
(struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
@@ -102,29 +436,62 @@ static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, u
&type);
/* Check that received record hold valid meter statistics */
- if (type == 1) {
- switch (inf_data->cause) {
- case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
- case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
- struct flow_handle *fh = (struct flow_handle *)flm_h.p;
- struct flm_age_event_s age_event;
- uint8_t port;
+ if (type == 2) {
+ uint64_t mtr_id = flm_h.idx;
+
+ if (mtr_id < handle->port_stats[caller_id]->size) {
+ struct flm_mtr_stat_s *mtr_stat =
+ handle->port_stats[caller_id]->stats;
+
+ /* Don't update a deleted meter */
+ uint64_t stats_mask = atomic_load(&mtr_stat[mtr_id].stats_mask);
+
+ if (stats_mask) {
+ atomic_store(&mtr_stat[mtr_id].n_pkt,
+ inf_data->packets | UINT64_MSB);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, inf_data->bytes);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, inf_data->packets);
+ struct flm_info_event_s stat_data;
+ bool remote_caller;
+ uint8_t port;
+
+ remote_caller = is_remote_caller(caller_id, &port);
+
+ /* Save stat data to flm stat queue */
+ stat_data.bytes = inf_data->bytes;
+ stat_data.packets = inf_data->packets;
+ stat_data.id = mtr_id;
+ stat_data.timestamp = inf_data->ts;
+ stat_data.cause = inf_data->cause;
+ flm_inf_queue_put(port, remote_caller, &stat_data);
+ }
+ }
- age_event.context = fh->context;
+ /* Check that received record holds valid flow data */
- is_remote_caller(caller_id, &port);
+ } else if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
- flm_age_queue_put(caller_id, &age_event);
- flm_age_event_set(port);
- }
- break;
+ age_event.context = fh->context;
- case INF_DATA_CAUSE_SW_UNLEARN:
- case INF_DATA_CAUSE_NA:
- case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
- case INF_DATA_CAUSE_SW_PROBE:
- default:
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
}
}
}
@@ -203,6 +570,42 @@ static uint32_t flm_update(struct flow_eth_dev *dev)
return inf_word_cnt + sta_word_cnt;
}
+static void flm_mtr_read_stats(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ *stats_mask = atomic_load(&mtr_stat[id].stats_mask);
+
+ if (*stats_mask) {
+ uint64_t pkt_1;
+ uint64_t pkt_2;
+ uint64_t nb;
+
+ do {
+ do {
+ pkt_1 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 & UINT64_MSB);
+
+ nb = atomic_load(&mtr_stat[id].n_bytes);
+ pkt_2 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 != pkt_2);
+
+ *green_pkt = pkt_1 - mtr_stat[id].n_pkt_base;
+ *green_bytes = nb - mtr_stat[id].n_bytes_base;
+
+ if (clear) {
+ mtr_stat[id].n_pkt_base = pkt_1;
+ mtr_stat[id].n_bytes_base = nb;
+ }
+ }
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -2511,6 +2914,13 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
+ for (int i = 0; i < MAX_FLM_MTRS_SUPPORTED; ++i) {
+ struct flm_flow_mtr_handle_s *handle = fh->dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[fh->caller_id]->stats;
+ fh->flm_mtr_ids[i] =
+ fd->mtr_ids[i] == UINT32_MAX ? 0 : mtr_stat[fd->mtr_ids[i]].flm_id;
+ }
+
switch (fd->l4_prot) {
case PROT_L4_TCP:
fh->flm_prot = 6;
@@ -3540,6 +3950,29 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (ndev->id_table_handle == NULL)
goto err_exit0;
+ ndev->flm_mtr_handle = calloc(1, sizeof(struct flm_flow_mtr_handle_s));
+ struct flm_mtr_shared_stats_s *flm_shared_stats =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *flm_stats =
+ calloc(FLM_MTR_STAT_SIZE, sizeof(struct flm_mtr_stat_s));
+
+ if (ndev->flm_mtr_handle == NULL || flm_shared_stats == NULL ||
+ flm_stats == NULL) {
+ free(ndev->flm_mtr_handle);
+ free(flm_shared_stats);
+ free(flm_stats);
+ goto err_exit0;
+ }
+
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ ((struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle)->port_stats[i] =
+ flm_shared_stats;
+ }
+
+ flm_shared_stats->stats = flm_stats;
+ flm_shared_stats->size = FLM_MTR_STAT_SIZE;
+ flm_shared_stats->shared = UINT8_MAX;
+
if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
goto err_exit0;
@@ -3574,6 +4007,27 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ struct flm_flow_mtr_handle_s *handle = ndev->flm_mtr_handle;
+ handle->port_stats[i]->shared -= 1;
+
+ if (handle->port_stats[i]->shared == 0) {
+ free(handle->port_stats[i]->stats);
+ free(handle->port_stats[i]);
+ }
+ }
+
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ struct flm_flow_mtr_handle_s *handle = ndev->flm_mtr_handle;
+ handle->port_stats[i]->shared -= 1;
+
+ if (handle->port_stats[i]->shared == 0) {
+ free(handle->port_stats[i]->stats);
+ free(handle->port_stats[i]);
+ }
+ }
+
+ free(ndev->flm_mtr_handle);
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
@@ -4693,6 +5147,11 @@ int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
port_info->max_nb_aging_objects = dev->nb_aging_objects;
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle)
+ port_info->max_nb_meters = mtr_handle->port_stats[caller_id]->size;
+
return res;
}
@@ -4724,6 +5183,35 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
dev->nb_aging_objects = port_attr->nb_aging_objects;
}
+ if (port_attr->nb_meters > 0) {
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle->port_stats[caller_id]->shared == 1) {
+ res = realloc(mtr_handle->port_stats[caller_id]->stats,
+ port_attr->nb_meters) == NULL
+ ? -1
+ : 0;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+
+ } else {
+ mtr_handle->port_stats[caller_id] =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *stats =
+ calloc(port_attr->nb_meters, sizeof(struct flm_mtr_stat_s));
+
+ if (mtr_handle->port_stats[caller_id] == NULL || stats == NULL) {
+ free(mtr_handle->port_stats[caller_id]);
+ free(stats);
+ error->message = "Failed to allocate meter actions";
+ goto error_out;
+ }
+
+ mtr_handle->port_stats[caller_id]->stats = stats;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+ mtr_handle->port_stats[caller_id]->shared = 1;
+ }
+ }
+
return res;
error_out:
@@ -4763,8 +5251,18 @@ static const struct profile_inline_ops ops = {
/*
* NT Flow FLM Meter API
*/
+ .flow_mtr_supported = flow_mtr_supported,
+ .flow_mtr_meter_policy_n_max = flow_mtr_meter_policy_n_max,
+ .flow_mtr_set_profile = flow_mtr_set_profile,
+ .flow_mtr_set_policy = flow_mtr_set_policy,
+ .flow_mtr_create_meter = flow_mtr_create_meter,
+ .flow_mtr_probe_meter = flow_mtr_probe_meter,
+ .flow_mtr_destroy_meter = flow_mtr_destroy_meter,
+ .flm_mtr_adjust_stats = flm_mtr_adjust_stats,
+ .flow_mtr_meters_supported = flow_mtr_meters_supported,
.flm_setup_queues = flm_setup_queues,
.flm_free_queues = flm_free_queues,
+ .flm_mtr_read_stats = flm_mtr_read_stats,
.flm_update = flm_update,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 029b0ac4eb..503674f4a4 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -303,10 +303,44 @@ struct profile_inline_ops {
uint64_t *data,
uint64_t size);
+ /*
+ * NT Flow FLM Meter API
+ */
+ int (*flow_mtr_supported)(struct flow_eth_dev *dev);
+
+ uint64_t (*flow_mtr_meter_policy_n_max)(void);
+
+ int (*flow_mtr_set_profile)(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a,
+ uint64_t bucket_rate_b, uint64_t bucket_size_b);
+
+ int (*flow_mtr_set_policy)(struct flow_eth_dev *dev, uint32_t policy_id, int drop);
+
+ int (*flow_mtr_create_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t profile_id, uint32_t policy_id, uint64_t stats_mask);
+
+ int (*flow_mtr_probe_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id);
+
+ int (*flow_mtr_destroy_meter)(struct flow_eth_dev *dev, uint8_t caller_id,
+ uint32_t mtr_id);
+
+ int (*flm_mtr_adjust_stats)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value);
+
+ uint32_t (*flow_mtr_meters_supported)(struct flow_eth_dev *dev, uint8_t caller_id);
+
/*
* NT Flow FLM queue API
*/
void (*flm_setup_queues)(void);
+ void (*flm_mtr_read_stats)(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear);
+
void (*flm_free_queues)(void);
uint32_t (*flm_update)(struct flow_eth_dev *dev);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
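The meter patch above encodes rates and bucket sizes with `convert_policing_parameter()`, which packs a 64-bit value into a 4-bit shift (bits 15:12) plus a 12-bit mantissa (bits 11:0), with `POLICING_PARAMETER_OFFSET` (4096) as the break point between the two regimes. The stand-alone copy below, runnable without the driver, shows the behavior: values below 4096 pass through unchanged, larger values are quantized to roughly `(4096 + mantissa) << (shift - 1)`.

```c
#include <assert.h>
#include <stdint.h>

#define POLICING_PARAMETER_OFFSET 4096ULL

/* Stand-alone copy of the patch's convert_policing_parameter().
 * For shift s > 0, the encoded value decodes back to approximately
 * (4096 + mantissa) << (s - 1). */
static uint64_t convert_policing_parameter(uint64_t value)
{
	uint64_t limit = POLICING_PARAMETER_OFFSET;
	uint64_t shift = 0;
	uint64_t res = value;

	/* Find the smallest shift whose range covers the value */
	while (shift < 15 && value >= limit) {
		limit <<= 1;
		++shift;
	}

	if (shift != 0) {
		uint64_t tmp = POLICING_PARAMETER_OFFSET * (1ULL << (shift - 1));

		if (tmp > value) {
			res = 0;
		} else {
			tmp = value - tmp;
			res = tmp >> (shift - 1);	/* mantissa */
		}

		if (res >= POLICING_PARAMETER_OFFSET)
			res = POLICING_PARAMETER_OFFSET - 1;

		res = res | (shift << 12);	/* pack shift above mantissa */
	}

	return res;
}
```

For example, 12288 encodes as shift 2 with mantissa 2048, and `(4096 + 2048) << 1` decodes back to exactly 12288, illustrating that the format loses precision only in the low bits dropped by the shift.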
* [PATCH v2 72/73] net/ntnic: add meter module
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (70 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 71/73] net/ntnic: add meter API Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 16:55 ` [PATCH v2 73/73] net/ntnic: add meter documentation Serhii Iliushyk
2024-10-22 17:11 ` [PATCH v2 00/73] Provide flow filter API and statistics Stephen Hemminger
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The meter module was added, providing:
1. add/remove profile
2. create/destroy flow
3. add/remove meter policy
4. read/update stats
The eth_dev_ops struct was extended with the ops above.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/ntos_drv.h | 14 +
drivers/net/ntnic/meson.build | 2 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 11 +-
drivers/net/ntnic/ntnic_mod_reg.c | 18 +
drivers/net/ntnic/ntnic_mod_reg.h | 11 +
6 files changed, 538 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 7b3c8ff3d6..f6ce442d17 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -12,6 +12,7 @@
#include <inttypes.h>
#include <rte_ether.h>
+#include "rte_mtr.h"
#include "stream_binary_flow_api.h"
#include "nthw_drv.h"
@@ -90,6 +91,19 @@ struct __rte_cache_aligned ntnic_tx_queue {
enum fpga_info_profile profile; /* Inline / Capture */
};
+struct nt_mtr_profile {
+ LIST_ENTRY(nt_mtr_profile) next;
+ uint32_t profile_id;
+ struct rte_mtr_meter_profile profile;
+};
+
+struct nt_mtr {
+ LIST_ENTRY(nt_mtr) next;
+ uint32_t mtr_id;
+ int shared;
+ struct nt_mtr_profile *profile;
+};
+
struct pmd_internals {
const struct rte_pci_device *pci_dev;
struct flow_eth_dev *flw_dev;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 8c6d02a5ec..ca46541ef3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -17,6 +17,7 @@ includes = [
include_directories('nthw'),
include_directories('nthw/supported'),
include_directories('nthw/model'),
+ include_directories('nthw/ntnic_meter'),
include_directories('nthw/flow_filter'),
include_directories('nthw/flow_api'),
include_directories('nim/'),
@@ -92,6 +93,7 @@ sources = files(
'nthw/flow_filter/flow_nthw_tx_cpy.c',
'nthw/flow_filter/flow_nthw_tx_ins.c',
'nthw/flow_filter/flow_nthw_tx_rpl.c',
+ 'nthw/ntnic_meter/ntnic_meter.c',
'nthw/model/nthw_fpga_model.c',
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
diff --git a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
new file mode 100644
index 0000000000..e4e8fe0c7d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
@@ -0,0 +1,483 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_meter.h>
+#include <rte_mtr.h>
+#include <rte_mtr_driver.h>
+#include <rte_malloc.h>
+
+#include "ntos_drv.h"
+#include "ntlog.h"
+#include "nt_util.h"
+#include "ntos_system.h"
+#include "ntnic_mod_reg.h"
+
+static inline uint8_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + (uint8_t)(port & 0x7f) + 1;
+}
+
+struct qos_integer_fractional {
+ uint32_t integer;
+ uint32_t fractional; /* 1/1024 */
+};
+
+/*
+ * Inline FLM metering
+ */
+
+static int eth_mtr_capabilities_get_inline(struct rte_eth_dev *eth_dev,
+ struct rte_mtr_capabilities *cap,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (!profile_inline_ops->flow_mtr_supported(internals->flw_dev)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Ethernet device does not support metering");
+ }
+
+ memset(cap, 0x0, sizeof(struct rte_mtr_capabilities));
+
+ /* MBR records use 28-bit integers */
+ cap->n_max = profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id);
+ cap->n_shared_max = cap->n_max;
+
+ cap->identical = 0;
+ cap->shared_identical = 0;
+
+ cap->shared_n_flows_per_mtr_max = UINT32_MAX;
+
+ /* Limited by number of MBR record ids per FLM learn record */
+ cap->chaining_n_mtrs_per_flow_max = 4;
+
+ cap->chaining_use_prev_mtr_color_supported = 0;
+ cap->chaining_use_prev_mtr_color_enforced = 0;
+
+ cap->meter_rate_max = (uint64_t)(0xfff << 0xf) * 1099;
+
+ cap->stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ /* Only color-blind mode is supported */
+ cap->color_aware_srtcm_rfc2697_supported = 0;
+ cap->color_aware_trtcm_rfc2698_supported = 0;
+ cap->color_aware_trtcm_rfc4115_supported = 0;
+
+ /* Focused on RFC2698 for now */
+ cap->meter_srtcm_rfc2697_n_max = 0;
+ cap->meter_trtcm_rfc2698_n_max = cap->n_max;
+ cap->meter_trtcm_rfc4115_n_max = 0;
+
+ cap->meter_policy_n_max = profile_inline_ops->flow_mtr_meter_policy_n_max();
+
+ /* Byte mode is supported */
+ cap->srtcm_rfc2697_byte_mode_supported = 0;
+ cap->trtcm_rfc2698_byte_mode_supported = 1;
+ cap->trtcm_rfc4115_byte_mode_supported = 0;
+
+ /* Packet mode not supported */
+ cap->srtcm_rfc2697_packet_mode_supported = 0;
+ cap->trtcm_rfc2698_packet_mode_supported = 0;
+ cap->trtcm_rfc4115_packet_mode_supported = 0;
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (profile->packet_mode != 0) {
+ return -rte_mtr_error_set(error, EINVAL,
+ RTE_MTR_ERROR_TYPE_METER_PROFILE_PACKET_MODE, NULL,
+ "Profile packet mode not supported");
+ }
+
+ if (profile->alg == RTE_MTR_SRTCM_RFC2697) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 2697 not supported");
+ }
+
+ if (profile->alg == RTE_MTR_TRTCM_RFC4115) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 4115 not supported");
+ }
+
+ if (profile->trtcm_rfc2698.cir != profile->trtcm_rfc2698.pir ||
+ profile->trtcm_rfc2698.cbs != profile->trtcm_rfc2698.pbs) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile committed and peak rates must be equal");
+ }
+
+ int res = profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id,
+ profile->trtcm_rfc2698.cir,
+ profile->trtcm_rfc2698.cbs, 0, 0);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile could not be added.");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_delete_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id, 0, 0, 0, 0);
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t policy_id,
+ struct rte_mtr_meter_policy_params *policy,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ const struct rte_flow_action *actions = policy->actions[RTE_COLOR_GREEN];
+ int green_action_supported = (actions[0].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_VOID &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_PASSTHRU &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END);
+
+ actions = policy->actions[RTE_COLOR_YELLOW];
+ int yellow_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ actions = policy->actions[RTE_COLOR_RED];
+ int red_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ if (green_action_supported == 0 || yellow_action_supported == 0 ||
+ red_action_supported == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Unsupported meter policy actions");
+ }
+
+ if (profile_inline_ops->flow_mtr_set_policy(internals->flw_dev, policy_id, 1)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Policy could not be added");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_delete_inline(struct rte_eth_dev *eth_dev __rte_unused,
+ uint32_t policy_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ return 0;
+}
+
+static int eth_mtr_create_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (params->use_prev_mtr_color != 0 || params->dscp_table != NULL) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only color blind mode is supported");
+ }
+
+ uint64_t allowed_stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ if ((params->stats_mask & ~allowed_stats_mask) != 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Requested color stats not supported");
+ }
+
+ if (params->meter_enable == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Disabled meters not supported");
+ }
+
+ if (shared == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only shared mtrs are supported");
+ }
+
+ if (params->meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (params->meter_policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ int res = profile_inline_ops->flow_mtr_create_meter(internals->flw_dev,
+ caller_id,
+ mtr_id,
+ params->meter_profile_id,
+ params->meter_policy_id,
+ params->stats_mask);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_destroy_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_destroy_meter(internals->flw_dev, caller_id, mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_stats_adjust_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ uint64_t adjust_value,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ const uint64_t adjust_bit = 1ULL << 63;
+ const uint64_t probe_bit = 1ULL << 62;
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (adjust_value & adjust_bit) {
+ adjust_value &= adjust_bit - 1;
+
+ if (adjust_value > (uint64_t)UINT32_MAX) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "Adjust value is out of range");
+ }
+
+ if (profile_inline_ops->flm_mtr_adjust_stats(internals->flw_dev, caller_id, mtr_id,
+ (uint32_t)adjust_value)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to adjust offloaded MTR");
+ }
+
+ return 0;
+ }
+
+ if (adjust_value & probe_bit) {
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_probe_meter(internals->flw_dev, caller_id,
+ mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to offload to hardware");
+ }
+
+ return 0;
+ }
+
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Meter stats update requires that bit 63 or bit 62 of \"stats_mask\" be set.");
+}
+
+static int eth_mtr_stats_read_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ memset(stats, 0x0, sizeof(struct rte_mtr_stats));
+ profile_inline_ops->flm_mtr_read_stats(internals->flw_dev, caller_id, mtr_id, stats_mask,
+ &stats->n_pkts[RTE_COLOR_GREEN],
+ &stats->n_bytes[RTE_COLOR_GREEN], clear);
+
+ return 0;
+}
+
+/*
+ * Ops setup
+ */
+
+static const struct rte_mtr_ops mtr_ops_inline = {
+ .capabilities_get = eth_mtr_capabilities_get_inline,
+ .meter_profile_add = eth_mtr_meter_profile_add_inline,
+ .meter_profile_delete = eth_mtr_meter_profile_delete_inline,
+ .create = eth_mtr_create_inline,
+ .destroy = eth_mtr_destroy_inline,
+ .meter_policy_add = eth_mtr_meter_policy_add_inline,
+ .meter_policy_delete = eth_mtr_meter_policy_delete_inline,
+ .stats_update = eth_mtr_stats_adjust_inline,
+ .stats_read = eth_mtr_stats_read_inline,
+};
+
+static int eth_mtr_ops_get(struct rte_eth_dev *eth_dev, void *ops)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ ntdrv_4ga_t *p_nt_drv = &internals->p_drv->ntdrv;
+ enum fpga_info_profile profile = p_nt_drv->adapter_info.fpga_info.profile;
+
+ switch (profile) {
+ case FPGA_INFO_PROFILE_INLINE:
+ *(const struct rte_mtr_ops **)ops = &mtr_ops_inline;
+ break;
+
+ case FPGA_INFO_PROFILE_UNKNOWN:
+
+ /* fallthrough */
+ case FPGA_INFO_PROFILE_CAPTURE:
+
+ /* fallthrough */
+ default:
+ NT_LOG(ERR, NTHW, "" PCIIDENT_PRINT_STR ": fpga profile not supported",
+ PCIIDENT_TO_DOMAIN(p_nt_drv->pciident),
+ PCIIDENT_TO_BUSNR(p_nt_drv->pciident),
+ PCIIDENT_TO_DEVNR(p_nt_drv->pciident),
+ PCIIDENT_TO_FUNCNR(p_nt_drv->pciident));
+ return -1;
+ }
+
+ return 0;
+}
+
+static struct meter_ops_s meter_ops = {
+ .eth_mtr_ops_get = eth_mtr_ops_get,
+};
+
+void meter_init(void)
+{
+ NT_LOG(DBG, NTNIC, "Meter ops initialized");
+ register_meter_ops(&meter_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 15374d3045..f7503b62ab 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1690,7 +1690,7 @@ static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_con
return 0;
}
-static const struct eth_dev_ops nthw_eth_dev_ops = {
+struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
.dev_stop = eth_dev_stop,
@@ -1713,6 +1713,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .mtr_ops_get = NULL,
.flow_ops_get = dev_flow_ops_get,
.xstats_get = eth_xstats_get,
.xstats_get_names = eth_xstats_get_names,
@@ -2176,6 +2177,14 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ const struct meter_ops_s *meter_ops = get_meter_ops();
+
+ if (meter_ops != NULL)
+ nthw_eth_dev_ops.mtr_ops_get = meter_ops->eth_mtr_ops_get;
+
+ else
+ NT_LOG(DBG, NTNIC, "Meter module is not initialized");
+
/* Initialize the queue system */
if (err == 0) {
sg_ops = get_sg_ops();
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 6737d18a6f..8d4a11feba 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,24 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+/*
+ *
+ */
+static struct meter_ops_s *meter_ops;
+
+void register_meter_ops(struct meter_ops_s *ops)
+{
+ meter_ops = ops;
+}
+
+const struct meter_ops_s *get_meter_ops(void)
+{
+ if (meter_ops == NULL)
+ meter_init();
+
+ return meter_ops;
+}
+
static const struct ntnic_filter_ops *ntnic_filter_ops;
void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 503674f4a4..147d8b2acb 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -9,6 +9,8 @@
#include <stdint.h>
#include "rte_ethdev.h"
+#include "rte_mtr_driver.h"
+
#include "rte_flow_driver.h"
#include "flow_api.h"
@@ -115,6 +117,15 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+/* Meter ops section */
+struct meter_ops_s {
+ int (*eth_mtr_ops_get)(struct rte_eth_dev *eth_dev, void *ops);
+};
+
+void register_meter_ops(struct meter_ops_s *ops);
+const struct meter_ops_s *get_meter_ops(void);
+void meter_init(void);
+
struct ntnic_filter_ops {
int (*poll_statistics)(struct pmd_internals *internals);
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v2 73/73] net/ntnic: add meter documentation
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (71 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 72/73] net/ntnic: add meter module Serhii Iliushyk
@ 2024-10-22 16:55 ` Serhii Iliushyk
2024-10-22 17:11 ` [PATCH v2 00/73] Provide flow filter API and statistics Stephen Hemminger
73 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-22 16:55 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The ntnic.ini feature file was extended with rte_flow meter action support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
3 files changed, 3 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index af2981ccf6..ecb0605de6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -43,3 +43,4 @@ queue = Y
raw_decap = Y
raw_encap = Y
rss = Y
+meter = Y
\ No newline at end of file
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e5a8d71892..4ae94b161c 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -70,6 +70,7 @@ Features
- Exact match of 140 million flows and policies.
- Basic stats
- Extended stats
+- Flow metering, including meter policy API.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 5be9660287..b4a0bdf245 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -163,6 +163,7 @@ New Features
* Added initialization of FPGA modules related to flow HW offload
* Added basic handling of the virtual queues
* Added age rte flow action support
+ * Added meter flow metering and flow policy support
* **Added cryptodev queue pair reset support.**
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* Re: [PATCH v2 00/73] Provide flow filter API and statistics
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (72 preceding siblings ...)
2024-10-22 16:55 ` [PATCH v2 73/73] net/ntnic: add meter documentation Serhii Iliushyk
@ 2024-10-22 17:11 ` Stephen Hemminger
73 siblings, 0 replies; 405+ messages in thread
From: Stephen Hemminger @ 2024-10-22 17:11 UTC (permalink / raw)
To: Serhii Iliushyk; +Cc: dev, mko-plv, ckm, andrew.rybchenko, ferruh.yigit
On Tue, 22 Oct 2024 18:54:17 +0200
Serhii Iliushyk <sil-plv@napatech.com> wrote:
> Disclaimer: This email and any files transmitted with it may contain confidential information intended for the addressee(s) only. The information is not to be surrendered or copied to unauthorized persons. If you have received this communication in error, please notify the sender immediately and delete this e-mail from your system.
Please fix your mail system.
If the DPDK users were to follow this advice as exactly worded by your lawyers,
then all these emails would have to be dropped, since we are not authorized
by Napatech.
^ permalink raw reply [flat|nested] 405+ messages in thread
* Re: [PATCH v2 07/73] net/ntnic: add NT flow profile management implementation
2024-10-22 16:54 ` [PATCH v2 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
@ 2024-10-22 17:17 ` Stephen Hemminger
0 siblings, 0 replies; 405+ messages in thread
From: Stephen Hemminger @ 2024-10-22 17:17 UTC (permalink / raw)
To: Serhii Iliushyk; +Cc: dev, mko-plv, ckm, andrew.rybchenko, ferruh.yigit
On Tue, 22 Oct 2024 18:54:24 +0200
Serhii Iliushyk <sil-plv@napatech.com> wrote:
> diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
> index 790b2f6b03..748da89262 100644
> --- a/drivers/net/ntnic/include/flow_api.h
> +++ b/drivers/net/ntnic/include/flow_api.h
> @@ -61,6 +61,10 @@ struct flow_nic_dev {
> void *km_res_handle;
> void *kcc_res_handle;
>
> + void *group_handle;
> + void *hw_db_handle;
> + void *id_table_handle;
> +
Use of untyped pointers (void *) can lead to errors; it would have been better
to make these struct pointers.
^ permalink raw reply [flat|nested] 405+ messages in thread
* Re: [PATCH v2 08/73] net/ntnic: add create/destroy implementation for NT flows
2024-10-22 16:54 ` [PATCH v2 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
@ 2024-10-22 17:20 ` Stephen Hemminger
2024-10-23 16:09 ` Serhii Iliushyk
0 siblings, 1 reply; 405+ messages in thread
From: Stephen Hemminger @ 2024-10-22 17:20 UTC (permalink / raw)
To: Serhii Iliushyk; +Cc: dev, mko-plv, ckm, andrew.rybchenko, ferruh.yigit
On Tue, 22 Oct 2024 18:54:25 +0200
Serhii Iliushyk <sil-plv@napatech.com> wrote:
> diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
> index 748da89262..667dad6d5f 100644
> --- a/drivers/net/ntnic/include/flow_api.h
> +++ b/drivers/net/ntnic/include/flow_api.h
> @@ -68,6 +68,9 @@ struct flow_nic_dev {
> uint32_t flow_unique_id_counter;
> /* linked list of all flows created on this NIC */
> struct flow_handle *flow_base;
> + /* linked list of all FLM flows created on this NIC */
> + struct flow_handle *flow_base_flm;
> + pthread_mutex_t flow_mtx;
Use of pthread_mutex makes the driver unportable to Windows, and
will block the thread in case of contention. It also will not
handle the primary/secondary process case.
Prefer use of DPDK spinlock if possible.
^ permalink raw reply [flat|nested] 405+ messages in thread
* Re: [PATCH v2 08/73] net/ntnic: add create/destroy implementation for NT flows
2024-10-22 17:20 ` Stephen Hemminger
@ 2024-10-23 16:09 ` Serhii Iliushyk
0 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:09 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Mykola Kostenok, Christian Koue Muf, andrew.rybchenko, ferruh.yigit
>On 22.10.2024, 20:21, "Stephen Hemminger" wrote:
>
>
>On Tue, 22 Oct 2024 18:54:25 +0200
>Serhii Iliushyk <sil-plv@napatech.com> wrote:
>
>
>> diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
>> index 748da89262..667dad6d5f 100644
>> --- a/drivers/net/ntnic/include/flow_api.h
>> +++ b/drivers/net/ntnic/include/flow_api.h
>> @@ -68,6 +68,9 @@ struct flow_nic_dev {
>> uint32_t flow_unique_id_counter;
>> /* linked list of all flows created on this NIC */
>> struct flow_handle *flow_base;
>> + /* linked list of all FLM flows created on this NIC */
>> + struct flow_handle *flow_base_flm;
>> + pthread_mutex_t flow_mtx;
>
>
>
>
>Use of pthread_mutex makes the driver unportable to Windows, and
>will block the the thread in case of contention. And it will not
>handle the case of primary/secondary process.
>
>
>Prefer use of DPDK spinlock if possible.
>
Hi Stephen!
The current version of our PMD supports only Linux x86_64 platforms.
Due to this, we have added a special condition to the meson.build file:
```
if not is_linux or not dpdk_conf.has('RTE_ARCH_X86_64')
build = false
reason = 'only supported on x86_64 Linux'
subdir_done()
endif
```
We prefer to use pthread for the current patch set and fix it later.
Best regards,
Serhii
NOTE: Please ignore the disclaimer. We are working on fixing it.
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 00/73] Provide flow filter API and statistics
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (73 preceding siblings ...)
2024-10-22 16:54 ` [PATCH v2 00/73] Provide flow filter API and statistics Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
` (72 more replies)
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
76 siblings, 73 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
The list of updates provided by the patchset:
* Multiple TX and RX queues.
* Scatter and gather for TX and RX.
* RSS hash
* RSS key update
* RSS based on VLAN or 5-tuple.
* RSS using different combinations of fields: L3 only,
L4 only or both, and source only, destination only or both.
* Several RSS hash keys, one for each flow type.
* Default RSS operation with no hash key specification.
* VLAN filtering.
* RX VLAN stripping via raw decap.
* TX VLAN insertion via raw encap.
* Flow API.
* Multiple processes.
* Tunnel types: GTP.
* Tunnel HW offload: Packet type, inner/outer RSS,
IP and UDP checksum verification.
* Support for multiple rte_flow groups.
* Encapsulation and decapsulation of GTP data.
* Packet modification: NAT, TTL decrement, DSCP tagging
* Traffic mirroring.
* Jumbo frame support.
* Port and queue statistics.
* RMON statistics in extended stats.
* Flow metering, including meter policy API.
* Link state information.
* CAM and TCAM based matching.
* Exact match of 140 million flows and policies.
* Basic stats
* Extended stats
* Flow metering, including meter policy API.
NOTE: Please ignore the disclaimer. We are working on fixing it.
Danylo Vodopianov (36):
net/ntnic: add API for configuration NT flow dev
net/ntnic: add item UDP
net/ntnic: add action TCP
net/ntnic: add action VLAN
net/ntnic: add item SCTP
net/ntnic: add items IPv6 and ICMPv6
net/ntnic: add action modify field
net/ntnic: add items gtp and actions raw encap/decap
net/ntnic: add cat module
net/ntnic: add SLC LR module
net/ntnic: add PDB module
net/ntnic: add QSL module
net/ntnic: add KM module
net/ntnic: add hash API
net/ntnic: add TPE module
net/ntnic: add FLM module
net/ntnic: add flm rcp module
net/ntnic: add learn flow queue handling
net/ntnic: match and action db attributes were added
net/ntnic: add statistics API
net/ntnic: add rpf module
net/ntnic: add statistics poll
net/ntnic: added flm stat interface
net/ntnic: add tsm module
net/ntnic: add xstats
net/ntnic: added flow statistics
net/ntnic: add scrub registers
net/ntnic: add flow aging API
net/ntnic: add aging API to the inline profile
net/ntnic: add flow info and flow configure APIs
net/ntnic: add flow aging event
net/ntnic: add termination thread
net/ntnic: add aging documentation
net/ntnic: add meter API
net/ntnic: add meter module
net/ntnic: update meter documentation
Oleksandr Kolomeiets (17):
net/ntnic: add flow dump feature
net/ntnic: add flow flush
net/ntnic: sort FPGA registers alphanumerically
net/ntnic: add MOD CSU
net/ntnic: add MOD FLM
net/ntnic: add HFU module
net/ntnic: add IFR module
net/ntnic: add MAC Rx module
net/ntnic: add MAC Tx module
net/ntnic: add RPP LR module
net/ntnic: add MOD SLC LR
net/ntnic: add Tx CPY module
net/ntnic: add Tx INS module
net/ntnic: add Tx RPL module
net/ntnic: add STA module
net/ntnic: add TSM module
net/ntnic: update documentation
Serhii Iliushyk (20):
net/ntnic: add flow filter API
net/ntnic: add minimal create/destroy flow operations
net/ntnic: add internal flow create/destroy API
net/ntnic: add minimal NT flow inline profile
net/ntnic: add management API for NT flow profile
net/ntnic: add NT flow profile management implementation
net/ntnic: add create/destroy implementation for NT flows
net/ntnic: add infrastructure for flow actions and items
net/ntnic: add action queue
net/ntnic: add action mark
net/ntnic: add action jump
net/ntnic: add action drop
net/ntnic: add item eth
net/ntnic: add item IPv4
net/ntnic: add item ICMP
net/ntnic: add item port ID
net/ntnic: add item void
net/ntnic: add GMF (Generic MAC Feeder) module
net/ntnic: update alignment for virt queue structs
net/ntnic: enable RSS feature
doc/guides/nics/features/ntnic.ini | 32 +
doc/guides/nics/ntnic.rst | 49 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 598 ++
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 +-
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 73 +
drivers/net/ntnic/include/flow_api.h | 138 +
drivers/net/ntnic/include/flow_api_engine.h | 328 +
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 252 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 4 +
drivers/net/ntnic/include/ntnic_stat.h | 265 +
drivers/net/ntnic/include/ntos_drv.h | 24 +
.../ntnic/include/stream_binary_flow_api.h | 67 +
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 +
drivers/net/ntnic/meson.build | 20 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 6 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 +
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 30 +
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 +
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 759 +++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 99 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
.../net/ntnic/nthw/flow_api/flow_id_table.c | 147 +
.../net/ntnic/nthw/flow_api/flow_id_table.h | 26 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1171 ++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 457 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 723 +++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 +++
.../flow_api/profile_inline/flm_age_queue.c | 164 +
.../flow_api/profile_inline/flm_age_queue.h | 42 +
.../flow_api/profile_inline/flm_evt_queue.c | 293 +
.../flow_api/profile_inline/flm_evt_queue.h | 55 +
.../flow_api/profile_inline/flm_lrn_queue.c | 70 +
.../flow_api/profile_inline/flm_lrn_queue.h | 25 +
.../profile_inline/flow_api_hw_db_inline.c | 2987 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 392 ++
.../profile_inline/flow_api_profile_inline.c | 5361 +++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 76 +
.../flow_api_profile_inline_config.h | 77 +
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 +
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 498 ++
.../supported/nthw_fpga_9563_055_049_0000.c | 3317 ++++++----
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 11 +-
.../nthw/supported/nthw_fpga_mod_str_map.c | 2 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 5 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 48 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 205 +
drivers/net/ntnic/ntnic_ethdev.c | 744 ++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 940 +++
drivers/net/ntnic/ntnic_mod_reg.c | 96 +
drivers/net/ntnic/ntnic_mod_reg.h | 225 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 +++
drivers/net/ntnic/ntutil/nt_util.h | 12 +
75 files changed, 23963 insertions(+), 1043 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 01/73] net/ntnic: add API for configuration NT flow dev
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 02/73] net/ntnic: add flow filter API Serhii Iliushyk
` (71 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
This API allows enabling a flow profile for NT SmartNICs.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 30 +++
drivers/net/ntnic/include/flow_api_engine.h | 5 +
drivers/net/ntnic/include/ntos_drv.h | 1 +
.../ntnic/include/stream_binary_flow_api.h | 9 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 221 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 22 ++
drivers/net/ntnic/ntnic_mod_reg.c | 5 +
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++
8 files changed, 307 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 984450afdc..c80906ec50 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -34,6 +34,8 @@ struct flow_eth_dev {
struct flow_nic_dev *ndev;
/* NIC port id */
uint8_t port;
+ /* App assigned port_id - may be DPDK port_id */
+ uint32_t port_id;
/* 0th for exception */
struct flow_queue_id_s rx_queue[FLOW_MAX_QUEUES + 1];
@@ -41,6 +43,9 @@ struct flow_eth_dev {
/* VSWITCH has exceptions sent on queue 0 per design */
int num_queues;
+ /* QSL_HSH index if RSS needed QSL v6+ */
+ int rss_target_id;
+
struct flow_eth_dev *next;
};
@@ -48,6 +53,8 @@ struct flow_eth_dev {
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
uint16_t ports; /* number of in-ports addressable on this NIC */
+ /* flow profile this NIC is initially prepared for */
+ enum flow_eth_dev_profile flow_profile;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
@@ -73,6 +80,14 @@ struct flow_nic_dev {
extern const char *dbg_res_descr[];
+#define flow_nic_set_bit(arr, x) \
+ do { \
+ uint8_t *_temp_arr = (arr); \
+ size_t _temp_x = (x); \
+ _temp_arr[_temp_x / 8] = \
+ (uint8_t)(_temp_arr[_temp_x / 8] | (uint8_t)(1 << (_temp_x % 8))); \
+ } while (0)
+
#define flow_nic_unset_bit(arr, x) \
do { \
size_t _temp_x = (x); \
@@ -85,6 +100,18 @@ extern const char *dbg_res_descr[];
(arr[_temp_x / 8] & (uint8_t)(1 << (_temp_x % 8))); \
})
+#define flow_nic_mark_resource_used(_ndev, res_type, index) \
+ do { \
+ struct flow_nic_dev *_temp_ndev = (_ndev); \
+ typeof(res_type) _temp_res_type = (res_type); \
+ size_t _temp_index = (index); \
+ NT_LOG(DBG, FILTER, "mark resource used: %s idx %zu", \
+ dbg_res_descr[_temp_res_type], _temp_index); \
+ assert(flow_nic_is_bit_set(_temp_ndev->res[_temp_res_type].alloc_bm, \
+ _temp_index) == 0); \
+ flow_nic_set_bit(_temp_ndev->res[_temp_res_type].alloc_bm, _temp_index); \
+ } while (0)
+
#define flow_nic_mark_resource_unused(_ndev, res_type, index) \
do { \
typeof(res_type) _temp_res_type = (res_type); \
@@ -97,6 +124,9 @@ extern const char *dbg_res_descr[];
#define flow_nic_is_resource_used(_ndev, res_type, index) \
(!!flow_nic_is_bit_set((_ndev)->res[res_type].alloc_bm, index))
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment);
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index db5e6fe09d..d025677e25 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -41,6 +41,11 @@ enum res_type_e {
RES_INVALID
};
+/*
+ * Flow NIC offload management
+ */
+#define MAX_OUTPUT_DEST (128)
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index d51d1e3677..8fd577dfe3 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -86,6 +86,7 @@ struct __rte_cache_aligned ntnic_tx_queue {
struct pmd_internals {
const struct rte_pci_device *pci_dev;
+ struct flow_eth_dev *flw_dev;
char name[20];
int n_intf_no;
int lpbk_mode;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 10529b8843..47e5353344 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,11 +12,20 @@
#define FLOW_MAX_QUEUES 128
+/*
+ * Flow eth dev profile determines how the FPGA module resources are
+ * managed and what features are available
+ */
+enum flow_eth_dev_profile {
+ FLOW_ETH_DEV_PROFILE_INLINE = 0,
+};
+
struct flow_queue_id_s {
int id;
int hw_id;
};
struct flow_eth_dev; /* port device */
+struct flow_handle;
#endif /* _STREAM_BINARY_FLOW_API_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34e84559eb..f49aca79c1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_nic_setup.h"
#include "ntnic_mod_reg.h"
+#include "flow_api.h"
#include "flow_filter.h"
const char *dbg_res_descr[] = {
@@ -35,6 +36,24 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Resources
+ */
+
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment)
+{
+ for (unsigned int i = 0; i < ndev->res[res_type].resource_count; i += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, i)) {
+ flow_nic_mark_resource_used(ndev, res_type, i);
+ ndev->res[res_type].ref[i] = 1;
+ return i;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
@@ -55,10 +74,60 @@ int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return !!ndev->res[res_type].ref[index];/* if 0 resource has been freed */
}
+/*
+ * Nic port/adapter lookup
+ */
+
+static struct flow_eth_dev *nic_and_port_to_eth_dev(uint8_t adapter_no, uint8_t port)
+{
+ struct flow_nic_dev *nic_dev = dev_base;
+
+ while (nic_dev) {
+ if (nic_dev->adapter_no == adapter_no)
+ break;
+
+ nic_dev = nic_dev->next;
+ }
+
+ if (!nic_dev)
+ return NULL;
+
+ struct flow_eth_dev *dev = nic_dev->eth_base;
+
+ while (dev) {
+ if (port == dev->port)
+ return dev;
+
+ dev = dev->next;
+ }
+
+ return NULL;
+}
+
+static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
+{
+ struct flow_nic_dev *ndev = dev_base;
+
+ while (ndev) {
+ if (adapter_no == ndev->adapter_no)
+ break;
+
+ ndev = ndev->next;
+ }
+
+ return ndev;
+}
+
/*
* Device Management API
*/
+static void nic_insert_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *dev)
+{
+ dev->next = ndev->eth_base;
+ ndev->eth_base = dev;
+}
+
static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *eth_dev)
{
struct flow_eth_dev *dev = ndev->eth_base, *prev = NULL;
@@ -242,6 +311,154 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
return -1;
}
+/*
+ * adapter_no physical adapter no
+ * port_no local port no
+ * alloc_rx_queues number of rx-queues to allocate for this eth_dev
+ */
+static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no, uint32_t port_id,
+ int alloc_rx_queues, struct flow_queue_id_s queue_ids[],
+ int *rss_target_id, enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+
+ int i;
+ struct flow_eth_dev *eth_dev = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "Get eth-port adapter %i, port %i, port_id %u, rx queues %i, profile %i",
+ adapter_no, port_no, port_id, alloc_rx_queues, flow_profile);
+
+ if (MAX_OUTPUT_DEST < FLOW_MAX_QUEUES) {
+ assert(0);
+ NT_LOG(ERR, FILTER,
+ "ERROR: Internal array for multiple queues too small for API");
+ }
+
+ pthread_mutex_lock(&base_mtx);
+ struct flow_nic_dev *ndev = get_nic_dev_from_adapter_no(adapter_no);
+
+ if (!ndev) {
+ /* Error - no flow api found on specified adapter */
+ NT_LOG(ERR, FILTER, "ERROR: no flow interface registered for adapter %d",
+ adapter_no);
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if (ndev->ports < ((uint16_t)port_no + 1)) {
+ NT_LOG(ERR, FILTER, "ERROR: port exceeds supported port range for adapter");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if ((alloc_rx_queues - 1) > FLOW_MAX_QUEUES) { /* 0th is exception so +1 */
+ NT_LOG(ERR, FILTER,
+ "ERROR: Exceeds supported number of rx queues per eth device");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ /* don't accept multiple eth_dev's on same NIC and same port */
+ eth_dev = nic_and_port_to_eth_dev(adapter_no, port_no);
+
+ if (eth_dev) {
+ NT_LOG(DBG, FILTER, "Re-opening existing NIC port device: NIC DEV: %i Port %i",
+ adapter_no, port_no);
+ pthread_mutex_unlock(&base_mtx);
+ flow_delete_eth_dev(eth_dev);
+ eth_dev = NULL;
+ }
+
+ eth_dev = calloc(1, sizeof(struct flow_eth_dev));
+
+ if (!eth_dev) {
+ NT_LOG(ERR, FILTER, "ERROR: calloc failed");
+ goto err_exit1;
+ }
+
+ pthread_mutex_lock(&ndev->mtx);
+
+ eth_dev->ndev = ndev;
+ eth_dev->port = port_no;
+ eth_dev->port_id = port_id;
+
+ /* Allocate the requested queues in HW for this dev */
+
+ for (i = 0; i < alloc_rx_queues; i++) {
+#ifdef SCATTER_GATHER
+ eth_dev->rx_queue[i] = queue_ids[i];
+#else
+ int queue_id = flow_nic_alloc_resource(ndev, RES_QUEUE, 1);
+
+ if (queue_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: no more free queue IDs in NIC");
+ goto err_exit0;
+ }
+
+ eth_dev->rx_queue[eth_dev->num_queues].id = (uint8_t)queue_id;
+ eth_dev->rx_queue[eth_dev->num_queues].hw_id =
+ ndev->be.iface->alloc_rx_queue(ndev->be.be_dev,
+ eth_dev->rx_queue[eth_dev->num_queues].id);
+
+ if (eth_dev->rx_queue[eth_dev->num_queues].hw_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: could not allocate a new queue");
+ goto err_exit0;
+ }
+
+ if (queue_ids)
+ queue_ids[eth_dev->num_queues] = eth_dev->rx_queue[eth_dev->num_queues];
+#endif
+
+ if (i == 0 && (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE && exception_path)) {
+ /*
+ * Init QSL UNM - unmatched - redirects otherwise discarded
+ * packets in QSL
+ */
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_DEST_QUEUE, eth_dev->port,
+ eth_dev->rx_queue[0].hw_id) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1) < 0)
+ goto err_exit0;
+ }
+
+ eth_dev->num_queues++;
+ }
+
+ eth_dev->rss_target_id = -1;
+
+ *rss_target_id = eth_dev->rss_target_id;
+
+ nic_insert_eth_port_dev(ndev, eth_dev);
+
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+ return eth_dev;
+
+err_exit0:
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+
+err_exit1:
+ if (eth_dev)
+ free(eth_dev);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ NT_LOG(DBG, FILTER, "ERR in %s", __func__);
+ return NULL; /* Error exit */
+}
+
struct flow_nic_dev *flow_api_create(uint8_t adapter_no, const struct flow_api_backend_ops *be_if,
void *be_dev)
{
@@ -383,6 +600,10 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
+ /*
+ * Device Management API
+ */
+ .flow_get_eth_dev = flow_get_eth_dev,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bff893ec7a..510c0e5d23 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1355,6 +1355,13 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1378,10 +1385,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
uint32_t n_port_mask = -1; /* All ports enabled by default */
uint32_t nb_rx_queues = 1;
uint32_t nb_tx_queues = 1;
+ uint32_t exception_path = 0;
struct flow_queue_id_s queue_ids[MAX_QUEUES];
int n_phy_ports;
struct port_link_speed pls_mbps[NUM_ADAPTER_PORTS_MAX] = { 0 };
int num_port_speeds = 0;
+ enum flow_eth_dev_profile profile = FLOW_ETH_DEV_PROFILE_INLINE;
+
NT_LOG_DBGX(DBG, NTNIC, "Dev %s PF #%i Init : %02x:%02x:%i", pci_dev->name,
pci_dev->addr.function, pci_dev->addr.bus, pci_dev->addr.devid,
pci_dev->addr.function);
@@ -1681,6 +1691,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (flow_filter_ops != NULL) {
+ internals->flw_dev = flow_filter_ops->flow_get_eth_dev(0, n_intf_no,
+ eth_dev->data->port_id, nb_rx_queues, queue_ids,
+ &internals->txq_scg[0].rss_target_id, profile, exception_path);
+
+ if (!internals->flw_dev) {
+ NT_LOG(ERR, NTNIC,
+ "Error creating port. Resource exhaustion in HW");
+ return -1;
+ }
+ }
+
/* connect structs */
internals->p_drv = p_drv;
eth_dev->data->dev_private = internals;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index a03c97801b..ac8afdef6a 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,6 +118,11 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+const struct profile_inline_ops *get_profile_inline_ops(void)
+{
+ return NULL;
+}
+
static const struct flow_filter_ops *flow_filter_ops;
void register_flow_filter_ops(const struct flow_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 5b97b3d8ac..017d15d7bc 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include "flow_api.h"
+#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
#include "nthw_platform_drv.h"
#include "nthw_drv.h"
@@ -223,10 +224,23 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+const struct profile_inline_ops *get_profile_inline_ops(void);
+
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
int adapter_no);
int (*flow_filter_done)(struct flow_nic_dev *dev);
+ /*
+ * Device Management API
+ */
+ struct flow_eth_dev *(*flow_get_eth_dev)(uint8_t adapter_no,
+ uint8_t hw_port_no,
+ uint32_t port_id,
+ int alloc_rx_queues,
+ struct flow_queue_id_s queue_ids[],
+ int *rss_target_id,
+ enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path);
};
void register_flow_filter_ops(const struct flow_filter_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 02/73] net/ntnic: add flow filter API
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 03/73] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
` (70 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Enable flow ops getter
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 13 +++++++
.../ntnic/include/stream_binary_flow_api.h | 2 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 7 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 37 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 15 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 5 +++
7 files changed, 80 insertions(+)
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
new file mode 100644
index 0000000000..802e6dcbe1
--- /dev/null
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -0,0 +1,13 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __CREATE_ELEMENTS_H__
+#define __CREATE_ELEMENTS_H__
+
+
+#include "stream_binary_flow_api.h"
+#include <rte_flow.h>
+
+#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 47e5353344..a6244d4082 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,8 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include "rte_flow.h"
+#include "rte_flow_driver.h"
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 3d9566a52e..d272c73c62 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -79,6 +79,7 @@ sources = files(
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
'ntlog/ntlog.c',
+ 'ntnic_filter/ntnic_filter.c',
'ntutil/nt_util.c',
'ntnic_mod_reg.c',
'ntnic_vfio.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 510c0e5d23..a509a8eb51 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1321,6 +1321,12 @@ eth_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, size_t fw_size
}
}
+static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct rte_flow_ops **ops)
+{
+ *ops = get_dev_flow_ops();
+ return 0;
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1349,6 +1355,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
};
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
new file mode 100644
index 0000000000..445139abc9
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -0,0 +1,37 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_flow_driver.h>
+#include "ntnic_mod_reg.h"
+
+static int
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ int res = 0;
+
+ return res;
+}
+
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ const struct rte_flow_item items[] __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct rte_flow *flow = NULL;
+
+ return flow;
+}
+
+static const struct rte_flow_ops dev_flow_ops = {
+ .create = eth_flow_create,
+ .destroy = eth_flow_destroy,
+};
+
+void dev_flow_init(void)
+{
+ register_dev_flow_ops(&dev_flow_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ac8afdef6a..ad2266116f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -137,3 +137,18 @@ const struct flow_filter_ops *get_flow_filter_ops(void)
return flow_filter_ops;
}
+
+static const struct rte_flow_ops *dev_flow_ops;
+
+void register_dev_flow_ops(const struct rte_flow_ops *ops)
+{
+ dev_flow_ops = ops;
+}
+
+const struct rte_flow_ops *get_dev_flow_ops(void)
+{
+ if (dev_flow_ops == NULL)
+ dev_flow_init();
+
+ return dev_flow_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 017d15d7bc..457dc58794 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -15,6 +15,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nthw_fpga_rst_nt200a0x.h"
#include "ntnic_virt_queue.h"
+#include "create_elements.h"
/* sg ops section */
struct sg_ops_s {
@@ -243,6 +244,10 @@ struct flow_filter_ops {
uint32_t exception_path);
};
+void register_dev_flow_ops(const struct rte_flow_ops *ops);
+const struct rte_flow_ops *get_dev_flow_ops(void);
+void dev_flow_init(void);
+
void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 03/73] net/ntnic: add minimal create/destroy flow operations
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 02/73] net/ntnic: add flow filter API Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 04/73] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
` (69 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add a high-level API that describes the base create/destroy implementation.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 51 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 227 +++++++++++++++++-
drivers/net/ntnic/ntutil/nt_util.h | 3 +
3 files changed, 274 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 802e6dcbe1..179542d2b2 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -6,8 +6,59 @@
#ifndef __CREATE_ELEMENTS_H__
#define __CREATE_ELEMENTS_H__
+#include "stdint.h"
#include "stream_binary_flow_api.h"
#include <rte_flow.h>
+#define MAX_ELEMENTS 64
+#define MAX_ACTIONS 32
+
+struct cnv_match_s {
+ struct rte_flow_item rte_flow_item[MAX_ELEMENTS];
+};
+
+struct cnv_attr_s {
+ struct cnv_match_s match;
+ struct rte_flow_attr attr;
+ uint16_t forced_vlan_vid;
+ uint16_t caller_id;
+};
+
+struct cnv_action_s {
+ struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_queue queue;
+};
+
+/*
+ * Only needed because it eases the use of statistics through NTAPI
+ * for faster integration into NTAPI version of driver
+ * Therefore, this is only a good idea when running on a temporary NTAPI
+ * The query() functionality must go to flow engine, when moved to Open Source driver
+ */
+
+struct rte_flow {
+ void *flw_hdl;
+ int used;
+
+ uint32_t flow_stat_id;
+
+ uint16_t caller_id;
+};
+
+enum nt_rte_flow_item_type {
+ NT_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
+ NT_RTE_FLOW_ITEM_TYPE_TUNNEL,
+};
+
+extern rte_spinlock_t flow_lock;
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem);
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset);
+
#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 445139abc9..74cf360da0 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -4,24 +4,237 @@
*/
#include <rte_flow_driver.h>
+#include "nt_util.h"
+#include "create_elements.h"
#include "ntnic_mod_reg.h"
+#include "ntos_system.h"
+
+#define MAX_RTE_FLOWS 8192
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
+static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
+{
+ if (error) {
+ error->cause = NULL;
+ error->message = rte_flow_error->message;
+
+ if (rte_flow_error->type == RTE_FLOW_ERROR_TYPE_NONE ||
+ rte_flow_error->type == RTE_FLOW_ERROR_TYPE_NONE)
+ error->type = RTE_FLOW_ERROR_TYPE_NONE;
+
+ else
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+
+ return 0;
+}
+
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr)
+{
+ memset(&attribute->attr, 0x0, sizeof(struct rte_flow_attr));
+
+ if (attr) {
+ attribute->attr.group = attr->group;
+ attribute->attr.priority = attr->priority;
+ }
+
+ return 0;
+}
+
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem)
+{
+ int eidx = 0;
+ int iter_idx = 0;
+ int type = -1;
+
+ if (!items) {
+ NT_LOG(ERR, FILTER, "ERROR no items to iterate!");
+ return -1;
+ }
+
+ do {
+ type = items[iter_idx].type;
+
+ if (type < 0) {
+ if ((int)items[iter_idx].type == NT_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ type = NT_RTE_FLOW_ITEM_TYPE_TUNNEL;
+
+ } else {
+ NT_LOG(ERR, FILTER, "ERROR unknown item type received!");
+ return -1;
+ }
+ }
+
+ if (type >= 0) {
+ if (items[iter_idx].last) {
+ /* Ranges are not supported yet */
+ NT_LOG(ERR, FILTER, "ERROR ITEM-RANGE SETUP - NOT SUPPORTED!");
+ return -1;
+ }
+
+ if (eidx == max_elem) {
+ NT_LOG(ERR, FILTER, "ERROR TOO MANY ELEMENTS ENCOUNTERED!");
+ return -1;
+ }
+
+ match->rte_flow_item[eidx].type = type;
+ match->rte_flow_item[eidx].spec = items[iter_idx].spec;
+ match->rte_flow_item[eidx].mask = items[iter_idx].mask;
+
+ eidx++;
+ iter_idx++;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
+ return (type >= 0) ? 0 : -1;
+}
+
+int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ int max_elem __rte_unused,
+ uint32_t queue_offset __rte_unused)
+{
+ int type = -1;
+
+ return (type >= 0) ? 0 : -1;
+}
+
+static inline uint16_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + port + 1;
+}
+
+static int convert_flow(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct cnv_attr_s *attribute,
+ struct cnv_match_s *match,
+ struct cnv_action_s *action,
+ struct rte_flow_error *error)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t queue_offset = 0;
+
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!internals) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Missing eth_dev");
+ return -1;
+ }
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0) {
+ /*
+ * The queues coming from the main PMD will always start from 0.
+ * When the port is the VF/vDPA port, the queues must be changed
+ * to match the queues allocated for VF/vDPA.
+ */
+ queue_offset = internals->vpq[0].id;
+ }
+
+ if (create_attr(attribute, attr) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL, "Error in attr");
+ return -1;
+ }
+
+ if (create_match_elements(match, items, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in items");
+ return -1;
+ }
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ if (create_action_elements_inline(action, actions,
+ MAX_ACTIONS, queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return -1;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return -1;
+ }
+
+ return 0;
+}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
- struct rte_flow_error *error __rte_unused)
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
+ struct rte_flow_error *error)
{
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!flow)
+ return 0;
return res;
}
-static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_attr *attr __rte_unused,
- const struct rte_flow_item items[] __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ struct cnv_attr_s attribute = { 0 };
+ struct cnv_match_s match = { 0 };
+ struct cnv_action_s action = { 0 };
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t flow_stat_id = 0;
+
+ if (convert_flow(eth_dev, attr, items, actions, &attribute, &match, &action, error) < 0)
+ return NULL;
+
+ /* Main application caller_id is port_id shifted above VF ports */
+ attribute.caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ convert_error(error, &flow_error);
+ return (struct rte_flow *)NULL;
+ }
+
struct rte_flow *flow = NULL;
+ rte_spinlock_lock(&flow_lock);
+ int i;
+
+ for (i = 0; i < MAX_RTE_FLOWS; i++) {
+ if (!nt_flows[i].used) {
+ nt_flows[i].flow_stat_id = flow_stat_id;
+
+ if (nt_flows[i].flow_stat_id < NT_MAX_COLOR_FLOW_STATS) {
+ nt_flows[i].used = 1;
+ flow = &nt_flows[i];
+ }
+
+ break;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
return flow;
}
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 64947f5fbf..71ecd6c68c 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -9,6 +9,9 @@
#include <stdint.h>
#include "nt4ga_link.h"
+/* Total max VDPA ports */
+#define MAX_VDPA_PORTS 128UL
+
#ifndef ARRAY_SIZE
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
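The slot allocation in eth_flow_create above claims the first unused entry of the static nt_flows[] table under flow_lock, and eth_flow_destroy releases it the same way. A minimal sketch of that fixed-pool pattern, using pthread_mutex_t in place of DPDK's rte_spinlock_t (names here are illustrative, not the driver's):

```c
#include <pthread.h>
#include <stddef.h>

#define MAX_RTE_FLOWS 8192

struct flow_slot {
	int used;
};

static pthread_mutex_t flow_lock = PTHREAD_MUTEX_INITIALIZER;
static struct flow_slot flows[MAX_RTE_FLOWS];

/* Claim the first free slot; returns NULL when the table is exhausted. */
static struct flow_slot *flow_alloc(void)
{
	struct flow_slot *slot = NULL;

	pthread_mutex_lock(&flow_lock);
	for (int i = 0; i < MAX_RTE_FLOWS; i++) {
		if (!flows[i].used) {
			flows[i].used = 1;
			slot = &flows[i];
			break;
		}
	}
	pthread_mutex_unlock(&flow_lock);
	return slot;
}

/* Release a slot so a later create can reuse it. */
static void flow_free(struct flow_slot *slot)
{
	pthread_mutex_lock(&flow_lock);
	slot->used = 0;
	pthread_mutex_unlock(&flow_lock);
}
```

Because slots are reused lowest-index-first, a destroy followed by a create hands back the same table entry, which keeps handle addresses stable within the static array.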
* [PATCH v3 04/73] net/ntnic: add internal flow create/destroy API
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (2 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 03/73] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 05/73] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
` (68 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
NT-specific flow filter API for creating/destroying a flow
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 39 +++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 66 ++++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++++
3 files changed, 116 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index f49aca79c1..d779dc481f 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -117,6 +117,40 @@ static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
return ndev;
}
+/*
+ * Flow API
+ */
+
+static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ uint16_t forced_vlan_vid __rte_unused,
+ uint16_t caller_id __rte_unused,
+ const struct rte_flow_item item[] __rte_unused,
+ const struct rte_flow_action action[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return NULL;
+ }
+
+ return NULL;
+}
+
+static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
+ struct flow_handle *flow __rte_unused, struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return -1;
+}
/*
* Device Management API
@@ -604,6 +638,11 @@ static const struct flow_filter_ops ops = {
* Device Management API
*/
.flow_get_eth_dev = flow_get_eth_dev,
+ /*
+ * NT Flow API
+ */
+ .flow_create = flow_create,
+ .flow_destroy = flow_destroy,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 74cf360da0..b9d723c9dd 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -110,6 +110,13 @@ static inline uint16_t get_caller_id(uint16_t port)
return MAX_VDPA_PORTS + port + 1;
}
+static int is_flow_handle_typecast(struct rte_flow *flow)
+{
+ const void *first_element = &nt_flows[0];
+ const void *last_element = &nt_flows[MAX_RTE_FLOWS - 1];
+ return (void *)flow < first_element || (void *)flow > last_element;
+}
+
static int convert_flow(struct rte_eth_dev *eth_dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
@@ -173,9 +180,17 @@ static int convert_flow(struct rte_eth_dev *eth_dev,
}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
- struct rte_flow_error *error)
+eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
@@ -185,6 +200,20 @@ eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow
if (!flow)
return 0;
+ if (is_flow_handle_typecast(flow)) {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, (void *)flow, &flow_error);
+ convert_error(error, &flow_error);
+
+ } else {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, flow->flw_hdl,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ rte_spinlock_unlock(&flow_lock);
+ }
+
return res;
}
@@ -194,6 +223,13 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
@@ -213,8 +249,12 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
attribute.caller_id = get_caller_id(eth_dev->data->port_id);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ void *flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
convert_error(error, &flow_error);
- return (struct rte_flow *)NULL;
+ return (struct rte_flow *)flw_hdl;
}
struct rte_flow *flow = NULL;
@@ -236,6 +276,26 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
rte_spinlock_unlock(&flow_lock);
+ if (flow) {
+ flow->flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ if (!flow->flw_hdl) {
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ flow = NULL;
+ rte_spinlock_unlock(&flow_lock);
+
+ } else {
+ rte_spinlock_lock(&flow_lock);
+ flow->caller_id = attribute.caller_id;
+ rte_spinlock_unlock(&flow_lock);
+ }
+ }
+
return flow;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 457dc58794..ec8c1612d1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -242,6 +242,20 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ /*
+ * NT Flow API
+ */
+ struct flow_handle *(*flow_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item item[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
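The is_flow_handle_typecast() helper introduced in this patch tells apart two kinds of rte_flow pointers: slots inside the driver's static nt_flows[] table, and internal flow_handle pointers that were cast directly to struct rte_flow * (the group > 0 path in eth_flow_create). A minimal, self-contained sketch of that address-range check (table size reduced for illustration; comparing pointers to unrelated objects is unspecified in ISO C, but this is the check the driver relies on in practice):

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_RTE_FLOWS 8

struct rte_flow {
	int used;
};

static struct rte_flow nt_flows[MAX_RTE_FLOWS];

/* A handle is a "typecast" foreign handle when it does not point into
 * the static nt_flows[] table. */
static bool is_flow_handle_typecast(const struct rte_flow *flow)
{
	const void *first_element = &nt_flows[0];
	const void *last_element = &nt_flows[MAX_RTE_FLOWS - 1];

	return (const void *)flow < first_element ||
	       (const void *)flow > last_element;
}
```

Destroy can then dispatch without any per-handle tag: table entries carry driver bookkeeping (used, flw_hdl), while foreign handles are passed straight through to the flow engine.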
* [PATCH v3 05/73] net/ntnic: add minimal NT flow inline profile
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (3 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 04/73] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 06/73] net/ntnic: add management API for NT flow profile Serhii Iliushyk
` (67 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
The flow profile implements all flow-related operations
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 15 +++++
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 28 +++++++-
.../profile_inline/flow_api_profile_inline.c | 65 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 33 ++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 12 +++-
drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++++
7 files changed, 174 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index c80906ec50..3bdfdd4f94 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -74,6 +74,21 @@ struct flow_nic_dev {
struct flow_nic_dev *next;
};
+enum flow_nic_err_msg_e {
+ ERR_SUCCESS = 0,
+ ERR_FAILED = 1,
+ ERR_OUTPUT_TOO_MANY = 3,
+ ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_ACTION_UNSUPPORTED = 28,
+ ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_OUTPUT_INVALID = 33,
+ ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_MSG_NO_MSG
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error);
+
/*
* Resources
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d272c73c62..f5605e81cb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d779dc481f..d0dad8e8f8 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -36,6 +36,29 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Error handling
+ */
+
+static const struct {
+ const char *message;
+} err_msg[] = {
+ /* 00 */ { "Operation successfully completed" },
+ /* 01 */ { "Operation failed" },
+ /* 29 */ { "Removing flow failed" },
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error)
+{
+ assert(msg < ERR_MSG_NO_MSG);
+
+ if (error) {
+ error->message = err_msg[msg].message;
+ error->type = (msg == ERR_SUCCESS) ? RTE_FLOW_ERROR_TYPE_NONE :
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+}
+
/*
* Resources
*/
@@ -136,7 +159,8 @@ static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
return NULL;
}
- return NULL;
+ return profile_inline_ops->flow_create_profile_inline(dev, attr,
+ forced_vlan_vid, caller_id, item, action, error);
}
static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
@@ -149,7 +173,7 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return -1;
}
- return -1;
+ return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
new file mode 100644
index 0000000000..a6293f5f82
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -0,0 +1,65 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "flow_api_profile_inline.h"
+#include "ntnic_mod_reg.h"
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ return NULL;
+}
+
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error)
+{
+ assert(dev);
+ assert(fh);
+
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ return err;
+}
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *flow,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (flow) {
+ /* Delete this flow */
+ pthread_mutex_lock(&dev->ndev->mtx);
+ err = flow_destroy_locked_profile_inline(dev, flow, error);
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ }
+
+ return err;
+}
+
+static const struct profile_inline_ops ops = {
+ /*
+ * Flow functionality
+ */
+ .flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
+ .flow_create_profile_inline = flow_create_profile_inline,
+ .flow_destroy_profile_inline = flow_destroy_profile_inline,
+};
+
+void profile_inline_init(void)
+{
+ register_profile_inline_ops(&ops);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
new file mode 100644
index 0000000000..a83cc299b4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -0,0 +1,33 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_H_
+#define _FLOW_API_PROFILE_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+#include "stream_binary_flow_api.h"
+
+/*
+ * Flow functionality
+ */
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+
+#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ad2266116f..593b56bf5b 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,9 +118,19 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+static const struct profile_inline_ops *profile_inline_ops;
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops)
+{
+ profile_inline_ops = ops;
+}
+
const struct profile_inline_ops *get_profile_inline_ops(void)
{
- return NULL;
+ if (profile_inline_ops == NULL)
+ profile_inline_init();
+
+ return profile_inline_ops;
}
static const struct flow_filter_ops *flow_filter_ops;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index ec8c1612d1..d133336fad 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -225,7 +225,30 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+struct profile_inline_ops {
+ /*
+ * Flow functionality
+ */
+ int (*flow_destroy_locked_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+ struct flow_handle *(*flow_create_profile_inline)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+};
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops);
const struct profile_inline_ops *get_profile_inline_ops(void);
+void profile_inline_init(void);
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
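This patch wires the profile_inline module into the driver's module registry: the module calls register_profile_inline_ops() with its ops table, and get_profile_inline_ops() lazily initializes the module on first use. A compact sketch of that register/get pattern with a single stubbed callback (the dummy function is illustrative, not part of the driver):

```c
#include <stddef.h>

/* Minimal ops table standing in for struct profile_inline_ops. */
struct profile_inline_ops {
	int (*flow_destroy_profile_inline)(void *dev, void *flow);
};

static const struct profile_inline_ops *profile_inline_ops;

/* The module implementation registers its ops table exactly once. */
static void register_profile_inline_ops(const struct profile_inline_ops *ops)
{
	profile_inline_ops = ops;
}

static int dummy_flow_destroy(void *dev, void *flow)
{
	(void)dev;
	(void)flow;
	return 0;
}

static const struct profile_inline_ops ops = {
	.flow_destroy_profile_inline = dummy_flow_destroy,
};

static void profile_inline_init(void)
{
	register_profile_inline_ops(&ops);
}

/* Consumers fetch the table; the first call triggers lazy init. */
static const struct profile_inline_ops *get_profile_inline_ops(void)
{
	if (profile_inline_ops == NULL)
		profile_inline_init();
	return profile_inline_ops;
}
```

Callers such as flow_create()/flow_destroy() in flow_api.c only ever see the ops pointer, so the inline profile can be replaced or left unlinked without touching the generic flow API code.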
* [PATCH v3 06/73] net/ntnic: add management API for NT flow profile
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (4 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 05/73] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
` (66 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
The management API implements (re)setting of the NT flow dev
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 ++
drivers/net/ntnic/nthw/flow_api/flow_api.c | 60 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 20 +++++++
.../profile_inline/flow_api_profile_inline.h | 8 +++
drivers/net/ntnic/ntnic_mod_reg.h | 8 +++
6 files changed, 102 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 3bdfdd4f94..790b2f6b03 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -55,6 +55,7 @@ struct flow_nic_dev {
uint16_t ports; /* number of in-ports addressable on this NIC */
/* flow profile this NIC is initially prepared for */
enum flow_eth_dev_profile flow_profile;
+ int flow_mgnt_prepared;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index d025677e25..52ff3cb865 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -46,6 +46,11 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+struct flow_handle {
+ struct flow_eth_dev *dev;
+ struct flow_handle *next;
+};
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d0dad8e8f8..6800a8d834 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -10,6 +10,8 @@
#include "flow_api.h"
#include "flow_filter.h"
+#define SCATTER_GATHER
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -210,10 +212,29 @@ static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_de
static void flow_ndev_reset(struct flow_nic_dev *ndev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return;
+ }
+
/* Delete all eth-port devices created on this NIC device */
while (ndev->eth_base)
flow_delete_eth_dev(ndev->eth_base);
+ /* Error check */
+ while (ndev->flow_base) {
+ NT_LOG(ERR, FILTER,
+ "ERROR : Flows still defined but all eth-ports deleted. Flow %p",
+ ndev->flow_base);
+
+ profile_inline_ops->flow_destroy_profile_inline(ndev->flow_base->dev,
+ ndev->flow_base, NULL);
+ }
+
+ profile_inline_ops->done_flow_management_of_ndev_profile_inline(ndev);
+
km_free_ndev_resource_management(&ndev->km_res_handle);
kcc_free_ndev_resource_management(&ndev->kcc_res_handle);
@@ -255,6 +276,13 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
struct flow_nic_dev *ndev = eth_dev->ndev;
if (!ndev) {
@@ -271,6 +299,20 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
/* delete all created flows from this device */
pthread_mutex_lock(&ndev->mtx);
+ struct flow_handle *flow = ndev->flow_base;
+
+ while (flow) {
+ if (flow->dev == eth_dev) {
+ struct flow_handle *flow_next = flow->next;
+ profile_inline_ops->flow_destroy_locked_profile_inline(eth_dev, flow,
+ NULL);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
/*
* remove unmatched queue if setup in QSL
* remove exception queue setting in QSL UNM
@@ -445,6 +487,24 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->port = port_no;
eth_dev->port_id = port_id;
+ /* First time the NIC is initialized */
+ if (!ndev->flow_mgnt_prepared) {
+ ndev->flow_profile = flow_profile;
+
+ /* Initialize modules if needed - recipe 0 is used as no-match and must be set up */
+ if (profile_inline_ops != NULL &&
+ profile_inline_ops->initialize_flow_management_of_ndev_profile_inline(ndev))
+ goto err_exit0;
+
+ } else {
+ /* check if same flow type is requested, otherwise fail */
+ if (ndev->flow_profile != flow_profile) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: Different flow types requested on same NIC device. Not supported.");
+ goto err_exit0;
+ }
+ }
+
/* Allocate the requested queues in HW for this dev */
for (i = 0; i < alloc_rx_queues; i++) {
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a6293f5f82..c9e4008b7e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,20 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+/*
+ * Public functions
+ */
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return -1;
+}
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return 0;
+}
+
struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid,
@@ -51,6 +65,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
}
static const struct profile_inline_ops ops = {
+ /*
+ * Management
+ */
+ .done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
+ .initialize_flow_management_of_ndev_profile_inline =
+ initialize_flow_management_of_ndev_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index a83cc299b4..b87f8542ac 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -11,6 +11,14 @@
#include "flow_api.h"
#include "stream_binary_flow_api.h"
+/*
+ * Management
+ */
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index d133336fad..149c549112 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -226,6 +226,14 @@ const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
struct profile_inline_ops {
+ /*
+ * Management
+ */
+
+ int (*done_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
+ int (*initialize_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
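The flow_mgnt_prepared guard added to flow_get_eth_dev() above means the first eth-dev on a NIC prepares flow management and records the flow profile, while every later eth-dev must request the same profile or fail. A minimal sketch of that initialize-once-then-verify logic (the second enum value here is hypothetical, only to exercise the mismatch path):

```c
#include <stdbool.h>

enum flow_eth_dev_profile {
	FLOW_ETH_DEV_PROFILE_INLINE = 0,
	FLOW_ETH_DEV_PROFILE_OTHER = 1,	/* illustrative second profile */
};

struct flow_nic_dev {
	bool flow_mgnt_prepared;
	enum flow_eth_dev_profile flow_profile;
};

/* First caller initializes and records the profile; subsequent callers
 * must request the same profile, otherwise the setup is rejected. */
static int prepare_flow_mgnt(struct flow_nic_dev *ndev,
			     enum flow_eth_dev_profile profile)
{
	if (!ndev->flow_mgnt_prepared) {
		ndev->flow_profile = profile;
		ndev->flow_mgnt_prepared = true;
		return 0;
	}

	if (ndev->flow_profile != profile)
		return -1;	/* mixed profiles on one NIC: unsupported */

	return 0;
}
```

In the driver the same guard also gates the call to initialize_flow_management_of_ndev_profile_inline(), so module setup (recipe 0 as no-match, resource tables) runs exactly once per NIC.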
* [PATCH v3 07/73] net/ntnic: add NT flow profile management implementation
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (5 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 06/73] net/ntnic: add management API for NT flow profile Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
` (65 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Implements the functions required to (re)set the NT flow dev
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 4 ++
drivers/net/ntnic/include/flow_api_engine.h | 10 ++++
drivers/net/ntnic/meson.build | 4 ++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 55 +++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 52 ++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 19 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 59 +++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 23 ++++++++
.../profile_inline/flow_api_profile_inline.c | 52 ++++++++++++++++
9 files changed, 278 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 790b2f6b03..748da89262 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -61,6 +61,10 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *group_handle;
+ void *hw_db_handle;
+ void *id_table_handle;
+
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 52ff3cb865..2497c31a08 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -6,6 +6,8 @@
#ifndef _FLOW_API_ENGINE_H_
#define _FLOW_API_ENGINE_H_
+#include <stdint.h>
+
/*
* Resource management
*/
@@ -46,6 +48,9 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_CPY_WRITERS_SUPPORTED 8
+
+
struct flow_handle {
struct flow_eth_dev *dev;
struct flow_handle *next;
@@ -55,4 +60,9 @@ void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
+/*
+ * Group management
+ */
+int flow_group_handle_create(void **handle, uint32_t group_count);
+int flow_group_handle_destroy(void **handle);
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f5605e81cb..f7292144ac 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -18,6 +18,7 @@ includes = [
include_directories('nthw/supported'),
include_directories('nthw/model'),
include_directories('nthw/flow_filter'),
+ include_directories('nthw/flow_api'),
include_directories('nim/'),
]
@@ -47,7 +48,10 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/flow_group.c',
+ 'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
+ 'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
new file mode 100644
index 0000000000..a7371f3aad
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -0,0 +1,55 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "flow_api_engine.h"
+
+#define OWNER_ID_COUNT 256
+#define PORT_COUNT 8
+
+struct group_lookup_entry_s {
+ uint64_t ref_counter;
+ uint32_t *reverse_lookup;
+};
+
+struct group_handle_s {
+ uint32_t group_count;
+
+ uint32_t *translation_table;
+
+ struct group_lookup_entry_s *lookup_entries;
+};
+
+int flow_group_handle_create(void **handle, uint32_t group_count)
+{
+ struct group_handle_s *group_handle;
+
+ *handle = calloc(1, sizeof(struct group_handle_s));
+ group_handle = *handle;
+
+ group_handle->group_count = group_count;
+ group_handle->translation_table =
+ calloc((uint32_t)(group_count * PORT_COUNT * OWNER_ID_COUNT), sizeof(uint32_t));
+ group_handle->lookup_entries = calloc(group_count, sizeof(struct group_lookup_entry_s));
+
+ return *handle != NULL ? 0 : -1;
+}
+
+int flow_group_handle_destroy(void **handle)
+{
+ if (*handle) {
+ struct group_handle_s *group_handle = (struct group_handle_s *)*handle;
+
+ free(group_handle->translation_table);
+ free(group_handle->lookup_entries);
+
+ free(*handle);
+ *handle = NULL;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
new file mode 100644
index 0000000000..9b46848e59
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <pthread.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "flow_id_table.h"
+
+#define NTNIC_ARRAY_BITS 14
+#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+
+struct ntnic_id_table_element {
+ union flm_handles handle;
+ uint8_t caller_id;
+ uint8_t type;
+};
+
+struct ntnic_id_table_data {
+ struct ntnic_id_table_element *arrays[NTNIC_ARRAY_SIZE];
+ pthread_mutex_t mtx;
+
+ uint32_t next_id;
+
+ uint32_t free_head;
+ uint32_t free_tail;
+ uint32_t free_count;
+};
+
+void *ntnic_id_table_create(void)
+{
+ struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
+
+ pthread_mutex_init(&handle->mtx, NULL);
+ handle->next_id = 1;
+
+ return handle;
+}
+
+void ntnic_id_table_destroy(void *id_table)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ for (uint32_t i = 0; i < NTNIC_ARRAY_SIZE; ++i)
+ free(handle->arrays[i]);
+
+ pthread_mutex_destroy(&handle->mtx);
+
+ free(id_table);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
new file mode 100644
index 0000000000..13455f1165
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLOW_ID_TABLE_H_
+#define _FLOW_ID_TABLE_H_
+
+#include <stdint.h>
+
+union flm_handles {
+ uint64_t idx;
+ void *p;
+};
+
+void *ntnic_id_table_create(void);
+void ntnic_id_table_destroy(void *id_table);
+
+#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
new file mode 100644
index 0000000000..5fda11183c
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+
+#include "flow_api_hw_db_inline.h"
+
+/******************************************************************************/
+/* Handle */
+/******************************************************************************/
+
+struct hw_db_inline_resource_db {
+ /* Actions */
+ struct hw_db_inline_resource_db_cot {
+ struct hw_db_inline_cot_data data;
+ int ref;
+ } *cot;
+
+ uint32_t nb_cot;
+
+ /* Hardware */
+
+ struct hw_db_inline_resource_db_cfn {
+ uint64_t priority;
+ int cfn_hw;
+ int ref;
+ } *cfn;
+};
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
+{
+ /* Note: calloc is required for functionality in the hw_db_inline_destroy() */
+ struct hw_db_inline_resource_db *db = calloc(1, sizeof(struct hw_db_inline_resource_db));
+
+ if (db == NULL)
+ return -1;
+
+ db->nb_cot = ndev->be.cat.nb_cat_funcs;
+ db->cot = calloc(db->nb_cot, sizeof(struct hw_db_inline_resource_db_cot));
+
+ if (db->cot == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ *db_handle = db;
+ return 0;
+}
+
+void hw_db_inline_destroy(void *db_handle)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ free(db->cot);
+
+ free(db->cfn);
+
+ free(db);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
new file mode 100644
index 0000000000..23caf73cf3
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_HW_DB_INLINE_H_
+#define _FLOW_API_HW_DB_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+
+struct hw_db_inline_cot_data {
+ uint32_t matcher_color_contrib : 4;
+ uint32_t frag_rcp : 4;
+ uint32_t padding : 24;
+};
+
+/**/
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
+void hw_db_inline_destroy(void *db_handle);
+
+#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index c9e4008b7e..986196b408 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,6 +4,9 @@
*/
#include "ntlog.h"
+#include "flow_api_engine.h"
+#include "flow_api_hw_db_inline.h"
+#include "flow_id_table.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
@@ -14,11 +17,60 @@
int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+ if (!ndev->flow_mgnt_prepared) {
+ /* Check static arrays are big enough */
+ assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+
+ ndev->id_table_handle = ntnic_id_table_create();
+
+ if (ndev->id_table_handle == NULL)
+ goto err_exit0;
+
+ if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
+ goto err_exit0;
+
+ if (hw_db_inline_create(ndev, &ndev->hw_db_handle))
+ goto err_exit0;
+
+ ndev->flow_mgnt_prepared = 1;
+ }
+
+ return 0;
+
+err_exit0:
+ done_flow_management_of_ndev_profile_inline(ndev);
return -1;
}
int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ if (ndev->flow_mgnt_prepared) {
+ flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+
+ flow_group_handle_destroy(&ndev->group_handle);
+ ntnic_id_table_destroy(ndev->id_table_handle);
+
+ flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+
+ hw_mod_tpe_reset(&ndev->be);
+ flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
+ flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+
+ hw_db_inline_destroy(ndev->hw_db_handle);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ ndev->flow_mgnt_prepared = 0;
+ }
+
return 0;
}
--
2.45.0
* [PATCH v3 08/73] net/ntnic: add create/destroy implementation for NT flows
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 09/73] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Implements flow create/destroy functions with minimal capabilities:
* item: any
* action: port id
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
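[Editor's illustration, not part of the patch: flow_id_table.c below stores elements in a lazily allocated two-level array, splitting each 28-bit flow id into two 14-bit indices (NTNIC_ARRAY_BITS) — the high half selects a chunk, the low half an element within it. A minimal sketch of just that index split, under the same constants; demo_* names are hypothetical:]

```c
#include <stdint.h>

/* Same split as flow_id_table.c: 14 bits per level. */
#define DEMO_ARRAY_BITS 14
#define DEMO_ARRAY_SIZE (1u << DEMO_ARRAY_BITS)
#define DEMO_ARRAY_MASK (DEMO_ARRAY_SIZE - 1)

static void demo_split_id(uint32_t id, uint32_t *idx_d1, uint32_t *idx_d2)
{
	/* Low 14 bits: position inside a chunk. */
	*idx_d1 = id & DEMO_ARRAY_MASK;
	/* Next 14 bits: which chunk of the top-level array. */
	*idx_d2 = (id >> DEMO_ARRAY_BITS) & DEMO_ARRAY_MASK;
}
```

Only chunks that are actually touched get calloc'd (see ntnic_id_table_array_find_element), so the table supports up to 2^28 ids without allocating them up front.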
doc/guides/nics/features/ntnic.ini | 6 +
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 105 +++
.../ntnic/include/stream_binary_flow_api.h | 4 +
drivers/net/ntnic/meson.build | 2 +
drivers/net/ntnic/nthw/flow_api/flow_group.c | 44 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 79 +++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 4 +
.../flow_api/profile_inline/flm_lrn_queue.c | 28 +
.../flow_api/profile_inline/flm_lrn_queue.h | 14 +
.../profile_inline/flow_api_hw_db_inline.c | 93 +++
.../profile_inline/flow_api_hw_db_inline.h | 64 ++
.../profile_inline/flow_api_profile_inline.c | 657 ++++++++++++++++++
13 files changed, 1103 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b9b87bdfe..1c653fd5a0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,3 +12,9 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
Linux = Y
x86-64 = Y
+
+[rte_flow items]
+any = Y
+
+[rte_flow actions]
+port_id = Y
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 748da89262..667dad6d5f 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -68,6 +68,9 @@ struct flow_nic_dev {
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
+ /* linked list of all FLM flows created on this NIC */
+ struct flow_handle *flow_base_flm;
+ pthread_mutex_t flow_mtx;
/* NIC backend API */
struct flow_api_backend_s be;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 2497c31a08..b8da5eafba 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -7,6 +7,10 @@
#define _FLOW_API_ENGINE_H_
#include <stdint.h>
+#include <stdatomic.h>
+
+#include "hw_mod_backend.h"
+#include "stream_binary_flow_api.h"
/*
* Resource management
@@ -50,10 +54,107 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+enum flow_port_type_e {
+ PORT_NONE, /* not defined or drop */
+ PORT_INTERNAL, /* no queues attached */
+ PORT_PHY, /* MAC phy output queue */
+ PORT_VIRT, /* Memory queues to Host */
+};
+
+struct output_s {
+ uint32_t owning_port_id;/* the port who owns this output destination */
+ enum flow_port_type_e type;
+ int id; /* depending on port type: queue ID or physical port id or not used */
+ int active; /* activated */
+};
+
+struct nic_flow_def {
+ /*
+ * Frame Decoder match info collected
+ */
+ int l2_prot;
+ int l3_prot;
+ int l4_prot;
+ int tunnel_prot;
+ int tunnel_l3_prot;
+ int tunnel_l4_prot;
+ int vlans;
+ int fragmentation;
+ int ip_prot;
+ int tunnel_ip_prot;
+ /*
+ * Additional meta data for various functions
+ */
+ int in_port_override;
+ int non_empty; /* default value is -1; value 1 means flow actions update */
+ struct output_s dst_id[MAX_OUTPUT_DEST];/* define the output to use */
+ /* total number of available queues defined for all outputs - i.e. number of dst_id's */
+ int dst_num_avail;
+
+ /*
+ * Mark or Action info collection
+ */
+ uint32_t mark;
+
+ uint32_t jump_to_group;
+
+ int full_offload;
+};
+
+enum flow_handle_type {
+ FLOW_HANDLE_TYPE_FLOW,
+ FLOW_HANDLE_TYPE_FLM,
+};
struct flow_handle {
+ enum flow_handle_type type;
+ uint32_t flm_id;
+ uint16_t caller_id;
+ uint16_t learn_ignored;
+
struct flow_eth_dev *dev;
struct flow_handle *next;
+ struct flow_handle *prev;
+
+ void *user_data;
+
+ union {
+ struct {
+ /*
+ * 1st step conversion and validation of flow
+ * verified and converted flow match + actions structure
+ */
+ struct nic_flow_def *fd;
+ /*
+ * 2nd step NIC HW resource allocation and configuration
+ * NIC resource management structures
+ */
+ struct {
+ uint32_t db_idx_counter;
+ uint32_t db_idxs[RES_COUNT];
+ };
+ uint32_t port_id; /* MAC port ID or override of virtual in_port */
+ };
+
+ struct {
+ uint32_t flm_db_idx_counter;
+ uint32_t flm_db_idxs[RES_COUNT];
+
+ uint32_t flm_data[10];
+ uint8_t flm_prot;
+ uint8_t flm_kid;
+ uint8_t flm_prio;
+ uint8_t flm_ft;
+
+ uint16_t flm_rpl_ext_ptr;
+ uint32_t flm_nat_ipv4;
+ uint16_t flm_nat_port;
+ uint8_t flm_dscp;
+ uint32_t flm_teid;
+ uint8_t flm_rqi;
+ uint8_t flm_qfi;
+ };
+ };
};
void km_free_ndev_resource_management(void **handle);
@@ -65,4 +166,8 @@ void kcc_free_ndev_resource_management(void **handle);
*/
int flow_group_handle_create(void **handle, uint32_t group_count);
int flow_group_handle_destroy(void **handle);
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out);
+
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index a6244d4082..d878b848c2 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -8,6 +8,10 @@
#include "rte_flow.h"
#include "rte_flow_driver.h"
+
+/* Max RSS hash key length in bytes */
+#define MAX_RSS_KEY_LEN 40
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f7292144ac..e1fef37ccb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -50,6 +50,8 @@ sources = files(
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
+ 'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
index a7371f3aad..f76986b178 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_group.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -53,3 +53,47 @@ int flow_group_handle_destroy(void **handle)
return 0;
}
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out)
+{
+ struct group_handle_s *group_handle = (struct group_handle_s *)handle;
+ uint32_t *table_ptr;
+ uint32_t lookup;
+
+ if (group_handle == NULL || group_in >= group_handle->group_count || port_id >= PORT_COUNT)
+ return -1;
+
+ /* Don't translate group 0 */
+ if (group_in == 0) {
+ *group_out = 0;
+ return 0;
+ }
+
+ table_ptr = &group_handle->translation_table[port_id * OWNER_ID_COUNT * PORT_COUNT +
+ owner_id * OWNER_ID_COUNT + group_in];
+ lookup = *table_ptr;
+
+ if (lookup == 0) {
+ for (lookup = 1; lookup < group_handle->group_count &&
+ group_handle->lookup_entries[lookup].ref_counter > 0;
+ ++lookup)
+ ;
+
+ if (lookup < group_handle->group_count) {
+ group_handle->lookup_entries[lookup].reverse_lookup = table_ptr;
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+
+ *table_ptr = lookup;
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+ }
+
+ *group_out = lookup;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 9b46848e59..5635ac4524 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -4,6 +4,7 @@
*/
#include <pthread.h>
+#include <stdint.h>
#include <stdlib.h>
#include <string.h>
@@ -11,6 +12,10 @@
#define NTNIC_ARRAY_BITS 14
#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+#define NTNIC_ARRAY_MASK (NTNIC_ARRAY_SIZE - 1)
+#define NTNIC_MAX_ID (NTNIC_ARRAY_SIZE * NTNIC_ARRAY_SIZE)
+#define NTNIC_MAX_ID_MASK (NTNIC_MAX_ID - 1)
+#define NTNIC_MIN_FREE 1000
struct ntnic_id_table_element {
union flm_handles handle;
@@ -29,6 +34,36 @@ struct ntnic_id_table_data {
uint32_t free_count;
};
+static inline struct ntnic_id_table_element *
+ntnic_id_table_array_find_element(struct ntnic_id_table_data *handle, uint32_t id)
+{
+ uint32_t idx_d1 = id & NTNIC_ARRAY_MASK;
+ uint32_t idx_d2 = (id >> NTNIC_ARRAY_BITS) & NTNIC_ARRAY_MASK;
+
+ if (handle->arrays[idx_d2] == NULL) {
+ handle->arrays[idx_d2] =
+ calloc(NTNIC_ARRAY_SIZE, sizeof(struct ntnic_id_table_element));
+ }
+
+ return &handle->arrays[idx_d2][idx_d1];
+}
+
+static inline uint32_t ntnic_id_table_array_pop_free_id(struct ntnic_id_table_data *handle)
+{
+ uint32_t id = 0;
+
+ if (handle->free_count > NTNIC_MIN_FREE) {
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_tail);
+ id = handle->free_tail;
+
+ handle->free_tail = element->handle.idx & NTNIC_MAX_ID_MASK;
+ handle->free_count -= 1;
+ }
+
+ return id;
+}
+
void *ntnic_id_table_create(void)
{
struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
@@ -50,3 +85,47 @@ void ntnic_id_table_destroy(void *id_table)
free(id_table);
}
+
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ uint32_t new_id = ntnic_id_table_array_pop_free_id(handle);
+
+ if (new_id == 0)
+ new_id = handle->next_id++;
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, new_id);
+ element->caller_id = caller_id;
+ element->type = type;
+ memcpy(&element->handle, &flm_h, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+
+ return new_id;
+}
+
+void ntnic_id_table_free_id(void *id_table, uint32_t id)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *current_element =
+ ntnic_id_table_array_find_element(handle, id);
+ memset(current_element, 0, sizeof(struct ntnic_id_table_element));
+
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_head);
+ element->handle.idx = id;
+ handle->free_head = id;
+ handle->free_count += 1;
+
+ if (handle->free_tail == 0)
+ handle->free_tail = handle->free_head;
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index 13455f1165..e190fe4a11 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -16,4 +16,8 @@ union flm_handles {
void *ntnic_id_table_create(void);
void ntnic_id_table_destroy(void *id_table);
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type);
+void ntnic_id_table_free_id(void *id_table, uint32_t id);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
new file mode 100644
index 0000000000..ad7efafe08
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+
+#include "hw_mod_flm_v25.h"
+
+#include "flm_lrn_queue.h"
+
+#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ unsigned int n = rte_ring_enqueue_zc_burst_elem_start(q, ELEM_SIZE, 1, &zcd, NULL);
+ return (n == 0) ? NULL : zcd.ptr1;
+}
+
+void flm_lrn_queue_release_write_buffer(void *q)
+{
+ rte_ring_enqueue_zc_elem_finish(q, 1);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
new file mode 100644
index 0000000000..8cee0c8e78
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_LRN_QUEUE_H_
+#define _FLM_LRN_QUEUE_H_
+
+#include <stdint.h>
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q);
+void flm_lrn_queue_release_write_buffer(void *q);
+
+#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5fda11183c..4ea9387c80 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -3,7 +3,11 @@
*/
+#include "hw_mod_backend.h"
+#include "flow_api_engine.h"
+
#include "flow_api_hw_db_inline.h"
+#include "rte_common.h"
/******************************************************************************/
/* Handle */
@@ -57,3 +61,92 @@ void hw_db_inline_destroy(void *db_handle)
free(db);
}
+
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size)
+{
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_COT:
+ hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/******************************************************************************/
+/* COT */
+/******************************************************************************/
+
+static int hw_db_inline_cot_compare(const struct hw_db_inline_cot_data *data1,
+ const struct hw_db_inline_cot_data *data2)
+{
+ return data1->matcher_color_contrib == data2->matcher_color_contrib &&
+ data1->frag_rcp == data2->frag_rcp;
+}
+
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cot_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_COT;
+
+ for (uint32_t i = 1; i < db->nb_cot; ++i) {
+ int ref = db->cot[i].ref;
+
+ if (ref > 0 && hw_db_inline_cot_compare(data, &db->cot[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cot_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cot[idx.ids].ref = 1;
+ memcpy(&db->cot[idx.ids].data, data, sizeof(struct hw_db_inline_cot_data));
+
+ return idx;
+}
+
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cot[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cot[idx.ids].ref -= 1;
+
+ if (db->cot[idx.ids].ref <= 0) {
+ memset(&db->cot[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cot_data));
+ db->cot[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 23caf73cf3..0116af015d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -9,15 +9,79 @@
#include "flow_api.h"
+#define HW_DB_INLINE_MAX_QST_PER_QSL 128
+#define HW_DB_INLINE_MAX_ENCAP_SIZE 128
+
+#define HW_DB_IDX \
+ union { \
+ struct { \
+ uint32_t id1 : 8; \
+ uint32_t id2 : 8; \
+ uint32_t id3 : 8; \
+ uint32_t type : 7; \
+ uint32_t error : 1; \
+ }; \
+ struct { \
+ uint32_t ids : 24; \
+ }; \
+ uint32_t raw; \
+ }
+
+/* Strongly typed int types */
+struct hw_db_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_cot_idx {
+ HW_DB_IDX;
+};
+
+enum hw_db_idx_type {
+ HW_DB_IDX_TYPE_NONE = 0,
+ HW_DB_IDX_TYPE_COT,
+};
+
+/* Functionality data types */
+struct hw_db_inline_qsl_data {
+ uint32_t discard : 1;
+ uint32_t drop : 1;
+ uint32_t table_size : 7;
+ uint32_t retransmit : 1;
+ uint32_t padding : 22;
+
+ struct {
+ uint16_t queue : 7;
+ uint16_t queue_en : 1;
+ uint16_t tx_port : 3;
+ uint16_t tx_port_en : 1;
+ uint16_t padding : 4;
+ } table[HW_DB_INLINE_MAX_QST_PER_QSL];
+};
+
struct hw_db_inline_cot_data {
uint32_t matcher_color_contrib : 4;
uint32_t frag_rcp : 4;
uint32_t padding : 24;
};
+struct hw_db_inline_hsh_data {
+ uint32_t func;
+ uint64_t hash_mask;
+ uint8_t key[MAX_RSS_KEY_LEN];
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
void hw_db_inline_destroy(void *db_handle);
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size);
+
+/**/
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data);
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 986196b408..7f9869a511 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,12 +4,545 @@
*/
#include "ntlog.h"
+#include "nt_util.h"
+
+#include "hw_mod_backend.h"
+#include "flm_lrn_queue.h"
+#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
#include "flow_id_table.h"
+#include "stream_binary_flow_api.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#include <rte_common.h>
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
+static void *flm_lrn_queue_arr;
+
+struct flm_flow_key_def_s {
+ union {
+ struct {
+ uint64_t qw0_dyn : 7;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 7;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 7;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 7;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_proto : 1;
+ uint64_t inner_proto : 1;
+ uint64_t pad : 2;
+ };
+ uint64_t data;
+ };
+ uint32_t mask[10];
+};
+
+/*
+ * Flow Matcher functionality
+ */
+static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
+{
+ struct flow_eth_dev *dev = ndev->eth_base;
+
+ while (dev) {
+ if (dev->port_id == port_id)
+ return dev->port;
+
+ dev = dev->next;
+ }
+
+ return UINT8_MAX;
+}
+
+static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base)
+ ndev->flow_base->prev = fh;
+
+ fh->next = ndev->flow_base;
+ fh->prev = NULL;
+ ndev->flow_base = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ struct flow_handle *next = fh->next;
+ struct flow_handle *prev = fh->prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base == fh) {
+ ndev->flow_base = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base_flm)
+ ndev->flow_base_flm->prev = fh;
+
+ fh->next = ndev->flow_base_flm;
+ fh->prev = NULL;
+ ndev->flow_base_flm = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh_flm)
+{
+ struct flow_handle *next = fh_flm->next;
+ struct flow_handle *prev = fh_flm->prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base_flm = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base_flm == fh_flm) {
+ ndev->flow_base_flm = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
+{
+ if (fd) {
+ fd->full_offload = -1;
+ fd->in_port_override = -1;
+ fd->mark = UINT32_MAX;
+ fd->jump_to_group = UINT32_MAX;
+
+ fd->l2_prot = -1;
+ fd->l3_prot = -1;
+ fd->l4_prot = -1;
+ fd->vlans = 0;
+ fd->tunnel_prot = -1;
+ fd->tunnel_l3_prot = -1;
+ fd->tunnel_l4_prot = -1;
+ fd->fragmentation = -1;
+ fd->ip_prot = -1;
+ fd->tunnel_ip_prot = -1;
+
+ fd->non_empty = -1;
+ }
+
+ return fd;
+}
+
+static inline struct nic_flow_def *allocate_nic_flow_def(void)
+{
+ return prepare_nic_flow_def(calloc(1, sizeof(struct nic_flow_def)));
+}
+
+static bool fd_has_empty_pattern(const struct nic_flow_def *fd)
+{
+ return fd && fd->vlans == 0 && fd->l2_prot < 0 && fd->l3_prot < 0 && fd->l4_prot < 0 &&
+ fd->tunnel_prot < 0 && fd->tunnel_l3_prot < 0 && fd->tunnel_l4_prot < 0 &&
+ fd->ip_prot < 0 && fd->tunnel_ip_prot < 0 && fd->non_empty < 0;
+}
+
+static inline const void *memcpy_mask_if(void *dest, const void *src, const void *mask,
+ size_t count)
+{
+ if (mask == NULL)
+ return src;
+
+ unsigned char *dest_ptr = (unsigned char *)dest;
+ const unsigned char *src_ptr = (const unsigned char *)src;
+ const unsigned char *mask_ptr = (const unsigned char *)mask;
+
+ for (size_t i = 0; i < count; ++i)
+ dest_ptr[i] = src_ptr[i] & mask_ptr[i];
+
+ return dest;
+}
+
+static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLM)
+ return -1;
+
+ if (flm_op == NT_FLM_OP_LEARN) {
+ union flm_handles flm_h;
+ flm_h.p = fh;
+ fh->flm_id = ntnic_id_table_get_id(fh->dev->ndev->id_table_handle, flm_h,
+ fh->caller_id, 1);
+ }
+
+ uint32_t flm_id = fh->flm_id;
+
+ if (flm_op == NT_FLM_OP_UNLEARN) {
+ ntnic_id_table_free_id(fh->dev->ndev->id_table_handle, flm_id);
+
+ if (fh->learn_ignored == 1)
+ return 0;
+ }
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->id = flm_id;
+
+ learn_record->qw0[0] = fh->flm_data[9];
+ learn_record->qw0[1] = fh->flm_data[8];
+ learn_record->qw0[2] = fh->flm_data[7];
+ learn_record->qw0[3] = fh->flm_data[6];
+ learn_record->qw4[0] = fh->flm_data[5];
+ learn_record->qw4[1] = fh->flm_data[4];
+ learn_record->qw4[2] = fh->flm_data[3];
+ learn_record->qw4[3] = fh->flm_data[2];
+ learn_record->sw8 = fh->flm_data[1];
+ learn_record->sw9 = fh->flm_data[0];
+ learn_record->prot = fh->flm_prot;
+
+ /* Last non-zero mtr is used for statistics */
+ uint8_t mbrs = 0;
+
+ learn_record->vol_idx = mbrs;
+
+ learn_record->nat_ip = fh->flm_nat_ipv4;
+ learn_record->nat_port = fh->flm_nat_port;
+ learn_record->nat_en = fh->flm_nat_ipv4 || fh->flm_nat_port ? 1 : 0;
+
+ learn_record->dscp = fh->flm_dscp;
+ learn_record->teid = fh->flm_teid;
+ learn_record->qfi = fh->flm_qfi;
+ learn_record->rqi = fh->flm_rqi;
+ /* Lower 10 bits used for RPL EXT PTR */
+ learn_record->color = fh->flm_rpl_ext_ptr & 0x3ff;
+
+ learn_record->ent = 0;
+ learn_record->op = flm_op & 0xf;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->prio = fh->flm_prio & 0x3;
+ learn_record->ft = fh->flm_ft;
+ learn_record->kid = fh->flm_kid;
+ learn_record->eor = 1;
+ learn_record->scrub_prof = 0;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+ return 0;
+}
+
+/*
+ * This function must be callable without locking any mutexes
+ */
+static int interpret_flow_actions(const struct flow_eth_dev *dev,
+ const struct rte_flow_action action[],
+ const struct rte_flow_action *action_mask,
+ struct nic_flow_def *fd,
+ struct rte_flow_error *error,
+ uint32_t *num_dest_port,
+ uint32_t *num_queues)
+{
+ unsigned int encap_decap_order = 0;
+
+ *num_dest_port = 0;
+ *num_queues = 0;
+
+ if (action == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow actions missing");
+ return -1;
+ }
+
+ /*
+ * Gather the flow match + actions and convert them into the internal flow
+ * definition structure (struct nic_flow_def). This is the first step in the
+ * flow creation - validate, convert and prepare.
+ */
+ for (int aidx = 0; action[aidx].type != RTE_FLOW_ACTION_TYPE_END; ++aidx) {
+ switch (action[aidx].type) {
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_PORT_ID", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_port_id port_id_tmp;
+ const struct rte_flow_action_port_id *port_id =
+ memcpy_mask_if(&port_id_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_port_id));
+
+ if (*num_dest_port > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple port_id actions for one flow are not supported");
+ flow_nic_set_error(ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED,
+ error);
+ return -1;
+ }
+
+ uint8_t port = get_port_from_port_id(dev->ndev, port_id->id);
+
+ if (fd->dst_num_avail == MAX_OUTPUT_DEST) {
+ NT_LOG(ERR, FILTER, "Too many output destinations");
+ flow_nic_set_error(ERR_OUTPUT_TOO_MANY, error);
+ return -1;
+ }
+
+ if (port >= dev->ndev->be.num_phy_ports) {
+ NT_LOG(ERR, FILTER, "Phy port out of range");
+ flow_nic_set_error(ERR_OUTPUT_INVALID, error);
+ return -1;
+ }
+
+ /* New destination port to add */
+ fd->dst_id[fd->dst_num_avail].owning_port_id = port_id->id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_PHY;
+ fd->dst_id[fd->dst_num_avail].id = (int)port;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ if (fd->full_offload < 0)
+ fd->full_offload = 1;
+
+ *num_dest_port += 1;
+
+ NT_LOG(DBG, FILTER, "Phy port ID: %i", (int)port);
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
+ action[aidx].type);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+ }
+
+ if (!(encap_decap_order == 0 || encap_decap_order == 2)) {
+ NT_LOG(ERR, FILTER, "Invalid encap/decap actions");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int interpret_flow_elements(const struct flow_eth_dev *dev,
+ const struct rte_flow_item elem[],
+ struct nic_flow_def *fd __rte_unused,
+ struct rte_flow_error *error,
+ uint16_t implicit_vlan_vid __rte_unused,
+ uint32_t *in_port_id,
+ uint32_t *packet_data,
+ uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
+{
+ *in_port_id = UINT32_MAX;
+
+ memset(packet_data, 0x0, sizeof(uint32_t) * 10);
+ memset(packet_mask, 0x0, sizeof(uint32_t) * 10);
+ memset(key_def, 0x0, sizeof(struct flm_flow_key_def_s));
+
+ if (elem == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow items missing");
+ return -1;
+ }
+
+ int qw_reserved_mac = 0;
+ int qw_reserved_ipv6 = 0;
+
+ int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
+
+ if (qw_free < 0) {
+ NT_LOG(ERR, FILTER, "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ANY:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
+ (int)elem[eidx].type);
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
+ uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
+ uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
+ uint32_t priority __rte_unused)
+{
+ struct nic_flow_def *fd;
+ struct flow_handle fh_copy;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLOW)
+ return -1;
+
+ memcpy(&fh_copy, fh, sizeof(struct flow_handle));
+ memset(fh, 0x0, sizeof(struct flow_handle));
+ fd = fh_copy.fd;
+
+ fh->type = FLOW_HANDLE_TYPE_FLM;
+ fh->caller_id = fh_copy.caller_id;
+ fh->dev = fh_copy.dev;
+ fh->next = fh_copy.next;
+ fh->prev = fh_copy.prev;
+ fh->user_data = fh_copy.user_data;
+
+ fh->flm_db_idx_counter = fh_copy.db_idx_counter;
+
+ for (int i = 0; i < RES_COUNT; ++i)
+ fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+
+ free(fd);
+
+ return 0;
+}
+
+static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
+ const struct nic_flow_def *fd __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ uint32_t group __rte_unused,
+ uint32_t local_idxs[] __rte_unused,
+ uint32_t *local_idx_counter __rte_unused,
+ uint16_t *flm_rpl_ext_ptr __rte_unused,
+ uint32_t *flm_ft __rte_unused,
+ uint32_t *flm_scrub __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ return 0;
+}
+
+static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct nic_flow_def *fd,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
+ struct rte_flow_error *error, uint32_t port_id,
+ uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ struct flm_flow_key_def_s *key_def __rte_unused)
+{
+ struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
+
+ if (fh == NULL) {
+ NT_LOG(ERR, FILTER, "ERR memory - flow handle allocation failed");
+ flow_nic_set_error(ERR_MEMORY, error);
+ return NULL;
+ }
+
+ fh->type = FLOW_HANDLE_TYPE_FLOW;
+ fh->port_id = port_id;
+ fh->dev = dev;
+ fh->fd = fd;
+ fh->caller_id = caller_id;
+
+ struct hw_db_inline_qsl_data qsl_data;
+
+ struct hw_db_inline_hsh_data hsh_data;
+
+ if (attr->group > 0 && fd_has_empty_pattern(fd)) {
+ /*
+ * Default flow for group 1..32
+ */
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, NULL, NULL, NULL, error)) {
+ goto error_out;
+ }
+
+ nic_insert_flow(dev->ndev, fh);
+
+ } else if (attr->group > 0) {
+ /*
+ * Flow for group 1..32
+ */
+
+ /* Setup Actions */
+ uint16_t flm_rpl_ext_ptr = 0;
+ uint32_t flm_ft = 0;
+ uint32_t flm_scrub = 0;
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, &flm_rpl_ext_ptr, &flm_ft,
+ &flm_scrub, error)) {
+ goto error_out;
+ }
+
+ /* Program flow */
+ convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ flm_scrub, attr->priority & 0x3);
+ flm_flow_programming(fh, NT_FLM_OP_LEARN);
+
+ nic_insert_flow_flm(dev->ndev, fh);
+
+ } else {
+ /*
+ * Flow for group 0
+ */
+ nic_insert_flow(dev->ndev, fh);
+ }
+
+ return fh;
+
+error_out:
+
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ } else {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ }
+
+ free(fh);
+
+ return NULL;
+}
/*
* Public functions
@@ -82,6 +615,92 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_action action[],
struct rte_flow_error *error)
{
+ struct flow_handle *fh = NULL;
+ int res;
+
+ uint32_t port_id = UINT32_MAX;
+ uint32_t num_dest_port;
+ uint32_t num_queues;
+
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ struct rte_flow_attr attr_local;
+ memcpy(&attr_local, attr, sizeof(struct rte_flow_attr));
+ uint16_t forced_vlan_vid_local = forced_vlan_vid;
+ uint16_t caller_id_local = caller_id;
+
+ if (attr_local.group > 0)
+ forced_vlan_vid_local = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ /*
+ * Take the device mutex before any path that can jump to err_exit,
+ * since err_exit unconditionally unlocks it.
+ */
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ if (fd == NULL)
+ goto err_exit;
+
+ res = interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
+
+ if (res)
+ goto err_exit;
+
+ res = interpret_flow_elements(dev, elem, fd, error, forced_vlan_vid_local, &port_id,
+ packet_data, packet_mask, &key_def);
+
+ if (res)
+ goto err_exit;
+
+ /* Translate group IDs */
+ if (fd->jump_to_group != UINT32_MAX &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ fd->jump_to_group, &fd->jump_to_group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (attr_local.group > 0 &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ attr_local.group, &attr_local.group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (port_id == UINT32_MAX)
+ port_id = dev->port_id;
+
+ /* Create and flush filter to NIC */
+ fh = create_flow_filter(dev, fd, &attr_local, forced_vlan_vid_local,
+ caller_id_local, error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ if (!fh)
+ goto err_exit;
+
+ NT_LOG(DBG, FILTER, "New flow: fh (flow handle) %p, fd (flow definition) %p", fh, fd);
+ NT_LOG(DBG, FILTER, ">>>>> [Dev %p] Nic %i, Port %i: fh %p fd %p - implementation <<<<<",
+ dev, dev->ndev->adapter_no, dev->port, fh, fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return fh;
+
+err_exit:
+
+ if (fh)
+ flow_destroy_locked_profile_inline(dev, fh, NULL);
+
+ else
+ free(fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ NT_LOG(ERR, FILTER, "ERR: %s", __func__);
return NULL;
}
@@ -96,6 +715,44 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
flow_nic_set_error(ERR_SUCCESS, error);
+ /* take flow out of ndev list - may not have been put there yet */
+ if (fh->type == FLOW_HANDLE_TYPE_FLM)
+ nic_remove_flow_flm(dev->ndev, fh);
+
+ else
+ nic_remove_flow(dev->ndev, fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ NT_LOG(DBG, FILTER, "removing flow: %p", fh);
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ flm_flow_programming(fh, NT_FLM_OP_UNLEARN);
+
+ } else {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ free(fh->fd);
+ }
+
+ if (err) {
+ NT_LOG(ERR, FILTER, "FAILED removing flow: %p", fh);
+ flow_nic_set_error(ERR_REMOVE_FLOW_FAILED, error);
+ }
+
+ free(fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
return err;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 09/73] net/ntnic: add infrastructure for flow actions and items
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (7 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 10/73] net/ntnic: add action queue Serhii Iliushyk
` (63 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add entities (utilities, structures, etc.) required for the flow API
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/flow_api.h | 34 ++++++++
drivers/net/ntnic/include/flow_api_engine.h | 46 +++++++++++
drivers/net/ntnic/include/hw_mod_backend.h | 33 ++++++++
drivers/net/ntnic/nthw/flow_api/flow_km.c | 81 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 68 +++++++++++++++-
5 files changed, 258 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 667dad6d5f..7f031ccda8 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -85,13 +85,47 @@ struct flow_nic_dev {
enum flow_nic_err_msg_e {
ERR_SUCCESS = 0,
ERR_FAILED = 1,
+ ERR_MEMORY = 2,
ERR_OUTPUT_TOO_MANY = 3,
+ ERR_RSS_TOO_MANY_QUEUES = 4,
+ ERR_VLAN_TYPE_NOT_SUPPORTED = 5,
+ ERR_VXLAN_HEADER_NOT_ACCEPTED = 6,
+ ERR_VXLAN_POP_INVALID_RECIRC_PORT = 7,
+ ERR_VXLAN_POP_FAILED_CREATING_VTEP = 8,
+ ERR_MATCH_VLAN_TOO_MANY = 9,
+ ERR_MATCH_INVALID_IPV6_HDR = 10,
+ ERR_MATCH_TOO_MANY_TUNNEL_PORTS = 11,
ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_FAILED_BY_HW_LIMITS = 13,
ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_MATCH_FAILED_TOO_COMPLEX = 15,
+ ERR_ACTION_REPLICATION_FAILED = 16,
+ ERR_ACTION_OUTPUT_RESOURCE_EXHAUSTION = 17,
+ ERR_ACTION_TUNNEL_HEADER_PUSH_OUTPUT_LIMIT = 18,
+ ERR_ACTION_INLINE_MOD_RESOURCE_EXHAUSTION = 19,
+ ERR_ACTION_RETRANSMIT_RESOURCE_EXHAUSTION = 20,
+ ERR_ACTION_FLOW_COUNTER_EXHAUSTION = 21,
+ ERR_ACTION_INTERNAL_RESOURCE_EXHAUSTION = 22,
+ ERR_INTERNAL_QSL_COMPARE_FAILED = 23,
+ ERR_INTERNAL_CAT_FUNC_REUSE_FAILED = 24,
+ ERR_MATCH_ENTROPHY_FAILED = 25,
+ ERR_MATCH_CAM_EXHAUSTED = 26,
+ ERR_INTERNAL_VIRTUAL_PORT_CREATION_FAILED = 27,
ERR_ACTION_UNSUPPORTED = 28,
ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_ACTION_NO_OUTPUT_DEFINED_USE_DEFAULT = 30,
+ ERR_ACTION_NO_OUTPUT_QUEUE_FOUND = 31,
+ ERR_MATCH_UNSUPPORTED_ETHER_TYPE = 32,
ERR_OUTPUT_INVALID = 33,
+ ERR_MATCH_PARTIAL_OFFLOAD_NOT_SUPPORTED = 34,
+ ERR_MATCH_CAT_CAM_EXHAUSTED = 35,
+ ERR_MATCH_KCC_KEY_CLASH = 36,
+ ERR_MATCH_CAT_CAM_FAILED = 37,
+ ERR_PARTIAL_FLOW_MARK_TOO_BIG = 38,
+ ERR_FLOW_PRIORITY_VALUE_INVALID = 39,
ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_RSS_TOO_LONG_KEY = 41,
+ ERR_ACTION_AGE_UNSUPPORTED_GROUP_0 = 42,
ERR_MSG_NO_MSG
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b8da5eafba..13fad2760a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -54,6 +54,30 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+#define MAX_MATCH_FIELDS 16
+
+struct match_elem_s {
+ int masked_for_tcam; /* if potentially selected for TCAM */
+ uint32_t e_word[4];
+ uint32_t e_mask[4];
+
+ int extr_start_offs_id;
+ int8_t rel_offs;
+ uint32_t word_len;
+};
+
+struct km_flow_def_s {
+ struct flow_api_backend_s *be;
+
+ /* For collect flow elements and sorting */
+ struct match_elem_s match[MAX_MATCH_FIELDS];
+ int num_ftype_elem;
+
+ /* Flow information */
+ /* HW input port ID needed for compare. In port must be identical on flow types */
+ uint32_t port_id;
+};
+
enum flow_port_type_e {
PORT_NONE, /* not defined or drop */
PORT_INTERNAL, /* no queues attached */
@@ -99,6 +123,25 @@ struct nic_flow_def {
uint32_t jump_to_group;
int full_offload;
+
+ /*
+ * Modify field
+ */
+ struct {
+ uint32_t select;
+ union {
+ uint8_t value8[16];
+ uint16_t value16[8];
+ uint32_t value32[4];
+ };
+ } modify_field[MAX_CPY_WRITERS_SUPPORTED];
+
+ uint32_t modify_field_count;
+
+ /*
+ * Key Matcher flow definitions
+ */
+ struct km_flow_def_s km;
};
enum flow_handle_type {
@@ -159,6 +202,9 @@ struct flow_handle {
void km_free_ndev_resource_management(void **handle);
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start, int8_t offset);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 34154c65f8..99b207a01c 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -133,6 +133,39 @@ enum km_flm_if_select_e {
unsigned int alloced_size; \
int debug
+enum {
+ PROT_OTHER = 0,
+ PROT_L2_ETH2 = 1,
+};
+
+enum {
+ PROT_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_L4_ICMP = 4
+};
+
+enum {
+ PROT_TUN_L3_OTHER = 0,
+ PROT_TUN_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_ICMP = 4
+};
+
+
+enum {
+ CPY_SELECT_DSCP_IPV4 = 0,
+ CPY_SELECT_DSCP_IPV6 = 1,
+ CPY_SELECT_RQI_QFI = 2,
+ CPY_SELECT_IPV4 = 3,
+ CPY_SELECT_PORT = 4,
+ CPY_SELECT_TEID = 5,
+};
+
struct common_func_s {
COMMON_FUNC_INFO_S;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index e04cd5e857..237e9f7b4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -3,10 +3,38 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <assert.h>
#include <stdlib.h>
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
+#include "nt_util.h"
+
+#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+
+static const struct cam_match_masks_s {
+ uint32_t word_len;
+ uint32_t key_mask[4];
+} cam_masks[] = {
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff } }, /* IP6_SRC, IP6_DST */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffff0000 } }, /* DMAC,SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffff0000, 0x00000000, 0xffff0000 } }, /* DMAC,ethtype */
+ { 4, { 0x00000000, 0x0000ffff, 0xffffffff, 0xffff0000 } }, /* SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0x00000000 } }, /* ETH_128 */
+ { 2, { 0xffffffff, 0xffffffff, 0x00000000, 0x00000000 } }, /* IP4_COMBINED */
+ /*
+ * ETH_TYPE, IP4_TTL_PROTO, IP4_SRC, IP4_DST, IP6_FLOW_TC,
+ * IP6_NEXT_HDR_HOP, TP_PORT_COMBINED, SIDEBAND_VNI
+ */
+ { 1, { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IP4_IHL_TOS, TP_PORT_SRC32_OR_ICMP, TCP_CTRL */
+ { 1, { 0xffff0000, 0x00000000, 0x00000000, 0x00000000 } },
+ { 1, { 0x0000ffff, 0x00000000, 0x00000000, 0x00000000 } }, /* TP_PORT_DST32 */
+ /* IPv4 TOS mask bits used often by OVS */
+ { 1, { 0x00030000, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IPv6 TOS mask bits used often by OVS */
+ { 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
+};
void km_free_ndev_resource_management(void **handle)
{
@@ -17,3 +45,56 @@ void km_free_ndev_resource_management(void **handle)
*handle = NULL;
}
+
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start_id, int8_t offset)
+{
+ /* valid word_len 1,2,4 */
+ if (word_len == 3) {
+ word_len = 4;
+ e_word[3] = 0;
+ e_mask[3] = 0;
+ }
+
+ if (word_len < 1 || word_len > 4) {
+ assert(0);
+ return -1;
+ }
+
+ for (unsigned int i = 0; i < word_len; i++) {
+ km->match[km->num_ftype_elem].e_word[i] = e_word[i];
+ km->match[km->num_ftype_elem].e_mask[i] = e_mask[i];
+ }
+
+ km->match[km->num_ftype_elem].word_len = word_len;
+ km->match[km->num_ftype_elem].rel_offs = offset;
+ km->match[km->num_ftype_elem].extr_start_offs_id = start_id;
+
+ /*
+ * Determine here if this flow may better be put into TCAM
+ * Otherwise it will go into CAM
+ * This is dependent on a cam_masks list defined above
+ */
+ km->match[km->num_ftype_elem].masked_for_tcam = 1;
+
+ for (unsigned int msk = 0; msk < NUM_CAM_MASKS; msk++) {
+ if (word_len == cam_masks[msk].word_len) {
+ int match = 1;
+
+ for (unsigned int wd = 0; wd < word_len; wd++) {
+ if (e_mask[wd] != cam_masks[msk].key_mask[wd]) {
+ match = 0;
+ break;
+ }
+ }
+
+ if (match) {
+ /* Can go into CAM */
+ km->match[km->num_ftype_elem].masked_for_tcam = 0;
+ }
+ }
+ }
+
+ km->num_ftype_elem++;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7f9869a511..0f136ee164 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -416,10 +416,67 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return 0;
}
-static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
- uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
- uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
- uint32_t priority __rte_unused)
+static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def *fd,
+ const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
+ uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
+{
+ switch (fd->l4_prot) {
+ case PROT_L4_ICMP:
+ fh->flm_prot = fd->ip_prot;
+ break;
+
+ default:
+ switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_ICMP:
+ fh->flm_prot = fd->tunnel_ip_prot;
+ break;
+
+ default:
+ fh->flm_prot = 0;
+ break;
+ }
+
+ break;
+ }
+
+ memcpy(fh->flm_data, packet_data, sizeof(uint32_t) * 10);
+
+ fh->flm_kid = flm_key_id;
+ fh->flm_rpl_ext_ptr = rpl_ext_ptr;
+ fh->flm_prio = (uint8_t)priority;
+ fh->flm_ft = (uint8_t)flm_ft;
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ switch (fd->modify_field[i].select) {
+ case CPY_SELECT_DSCP_IPV4:
+ case CPY_SELECT_RQI_QFI:
+ fh->flm_rqi = (fd->modify_field[i].value8[0] >> 6) & 0x1;
+ fh->flm_qfi = fd->modify_field[i].value8[0] & 0x3f;
+ break;
+
+ case CPY_SELECT_IPV4:
+ fh->flm_nat_ipv4 = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ case CPY_SELECT_PORT:
+ fh->flm_nat_port = ntohs(fd->modify_field[i].value16[0]);
+ break;
+
+ case CPY_SELECT_TEID:
+ fh->flm_teid = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Unknown modify field: %d",
+ fd->modify_field[i].select);
+ break;
+ }
+ }
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
+ uint32_t flm_key_id, uint32_t flm_ft, uint16_t rpl_ext_ptr,
+ uint32_t flm_scrub, uint32_t priority)
{
struct nic_flow_def *fd;
struct flow_handle fh_copy;
@@ -443,6 +500,9 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
for (int i = 0; i < RES_COUNT; ++i)
fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+ copy_fd_to_fh_flm(fh, fd, packet_data, flm_key_id, flm_ft, rpl_ext_ptr, flm_scrub,
+ priority);
+
free(fd);
return 0;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 10/73] net/ntnic: add action queue
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (8 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 09/73] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 11/73] net/ntnic: add action mark Serhii Iliushyk
` (62 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add possibility to use RTE_FLOW_ACTION_TYPE_QUEUE
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 37 +++++++++++++++++++
2 files changed, 38 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 1c653fd5a0..5b3c26da05 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,3 +18,4 @@ any = Y
[rte_flow actions]
port_id = Y
+queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0f136ee164..a3fe2fe902 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -23,6 +23,15 @@
static void *flm_lrn_queue_arr;
+static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
+{
+ for (int i = 0; i < dev->num_queues; ++i)
+ if (dev->rx_queue[i].id == id)
+ return dev->rx_queue[i].hw_id;
+
+ return -1;
+}
+
struct flm_flow_key_def_s {
union {
struct {
@@ -349,6 +358,34 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_queue queue_tmp;
+ const struct rte_flow_action_queue *queue =
+ memcpy_mask_if(&queue_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_queue));
+
+ int hw_id = rx_queue_idx_to_hw_id(dev, queue->index);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE port %u, queue index: %u, hw id %u",
+ dev, dev->port, queue->index, hw_id);
+
+ fd->full_offload = 0;
+ *num_queues += 1;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 11/73] net/ntnic: add action mark
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (9 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 10/73] net/ntnic: add action queue Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 12/73] net/ntnic: add action jump Serhii Iliushyk
` (61 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add possibility to use RTE_FLOW_ACTION_TYPE_MARK
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 5b3c26da05..42ac9f9c31 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,5 +17,6 @@ x86-64 = Y
any = Y
[rte_flow actions]
+mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a3fe2fe902..96b7192edc 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -386,6 +386,22 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_mark mark_tmp;
+ const struct rte_flow_action_mark *mark =
+ memcpy_mask_if(&mark_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_mark));
+
+ fd->mark = mark->id;
+ NT_LOG(DBG, FILTER, "Mark: %i", mark->id);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 12/73] net/ntnic: add action jump
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (10 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 11/73] net/ntnic: add action mark Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 13/73] net/ntnic: add action drop Serhii Iliushyk
` (60 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add possibility to use RTE_FLOW_ACTION_TYPE_JUMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 17 +++++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 42ac9f9c31..f3334fc86d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+jump = Y
mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 96b7192edc..603039374a 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -402,6 +402,23 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_jump jump_tmp;
+ const struct rte_flow_action_jump *jump =
+ memcpy_mask_if(&jump_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_jump));
+
+ fd->jump_to_group = jump->group;
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP: group %u",
+ dev, jump->group);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 13/73] net/ntnic: add action drop
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (11 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 12/73] net/ntnic: add action jump Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 14/73] net/ntnic: add item eth Serhii Iliushyk
` (59 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ACTION_TYPE_DROP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index f3334fc86d..372653695d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+drop = Y
jump = Y
mark = Y
port_id = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 603039374a..64168fcc7d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -419,6 +419,18 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_DROP", dev);
+
+ if (action[aidx].conf) {
+ fd->dst_id[fd->dst_num_avail].owning_port_id = 0;
+ fd->dst_id[fd->dst_num_avail].id = 0;
+ fd->dst_id[fd->dst_num_avail].type = PORT_NONE;
+ fd->dst_num_avail++;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 14/73] net/ntnic: add item eth
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (12 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 13/73] net/ntnic: add action drop Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 15/73] net/ntnic: add item IPv4 Serhii Iliushyk
` (58 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_ETH
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 23 +++
.../profile_inline/flow_api_profile_inline.c | 180 ++++++++++++++++++
3 files changed, 204 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 372653695d..36b8212bae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -15,6 +15,7 @@ x86-64 = Y
[rte_flow items]
any = Y
+eth = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 99b207a01c..0c22129fb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -120,6 +120,29 @@ enum {
} \
} while (0)
+static inline int is_non_zero(const void *addr, size_t n)
+{
+ size_t i = 0;
+ const uint8_t *p = (const uint8_t *)addr;
+
+ for (i = 0; i < n; i++)
+ if (p[i] != 0)
+ return 1;
+
+ return 0;
+}
+
+enum frame_offs_e {
+ DYN_L2 = 1,
+ DYN_L3 = 4,
+ DYN_L4 = 7,
+ DYN_L4_PAYLOAD = 8,
+ DYN_TUN_L3 = 13,
+ DYN_TUN_L4 = 16,
+};
+
+/* Sideband info bit indicator */
+
enum km_flm_if_select_e {
KM_FLM_IF_FIRST = 0,
KM_FLM_IF_SECOND = 1
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 64168fcc7d..93f666a054 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -55,6 +55,36 @@ struct flm_flow_key_def_s {
/*
* Flow Matcher functionality
*/
+static inline void set_key_def_qw(struct flm_flow_key_def_s *key_def, unsigned int qw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(qw < 2);
+
+ if (qw == 0) {
+ key_def->qw0_dyn = dyn & 0x7f;
+ key_def->qw0_ofs = ofs & 0xff;
+
+ } else {
+ key_def->qw4_dyn = dyn & 0x7f;
+ key_def->qw4_ofs = ofs & 0xff;
+ }
+}
+
+static inline void set_key_def_sw(struct flm_flow_key_def_s *key_def, unsigned int sw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(sw < 2);
+
+ if (sw == 0) {
+ key_def->sw8_dyn = dyn & 0x7f;
+ key_def->sw8_ofs = ofs & 0xff;
+
+ } else {
+ key_def->sw9_dyn = dyn & 0x7f;
+ key_def->sw9_ofs = ofs & 0xff;
+ }
+}
+
static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
{
struct flow_eth_dev *dev = ndev->eth_base;
@@ -457,6 +487,11 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
uint32_t *packet_mask,
struct flm_flow_key_def_s *key_def)
{
+ uint32_t any_count = 0;
+
+ unsigned int qw_counter = 0;
+ unsigned int sw_counter = 0;
+
*in_port_id = UINT32_MAX;
memset(packet_data, 0x0, sizeof(uint32_t) * 10);
@@ -472,6 +507,28 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH: {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (eth_spec != NULL && eth_mask != NULL) {
+ if (is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6)) {
+ qw_reserved_mac += 1;
+ }
+ }
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
if (qw_free < 0) {
@@ -484,6 +541,129 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
switch (elem[eidx].type) {
case RTE_FLOW_ITEM_TYPE_ANY:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ any_count += 1;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ETH",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (any_count > 0) {
+ NT_LOG(ERR, FILTER,
+ "Tunneled L2 ethernet not supported");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (eth_spec == NULL || eth_mask == NULL) {
+ fd->l2_prot = PROT_L2_ETH2;
+ break;
+ }
+
+ int non_zero = is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6);
+
+ if (non_zero ||
+ (eth_mask->ether_type != 0 && sw_counter >= 2)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ((eth_spec->dst_addr.addr_bytes[0] &
+ eth_mask->dst_addr.addr_bytes[0]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[1] &
+ eth_mask->dst_addr.addr_bytes[1]) << 16) +
+ ((eth_spec->dst_addr.addr_bytes[2] &
+ eth_mask->dst_addr.addr_bytes[2]) << 8) +
+ (eth_spec->dst_addr.addr_bytes[3] &
+ eth_mask->dst_addr.addr_bytes[3]);
+
+ qw_data[1] = ((eth_spec->dst_addr.addr_bytes[4] &
+ eth_mask->dst_addr.addr_bytes[4]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[5] &
+ eth_mask->dst_addr.addr_bytes[5]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[0] &
+ eth_mask->src_addr.addr_bytes[0]) << 8) +
+ (eth_spec->src_addr.addr_bytes[1] &
+ eth_mask->src_addr.addr_bytes[1]);
+
+ qw_data[2] = ((eth_spec->src_addr.addr_bytes[2] &
+ eth_mask->src_addr.addr_bytes[2]) << 24) +
+ ((eth_spec->src_addr.addr_bytes[3] &
+ eth_mask->src_addr.addr_bytes[3]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[4] &
+ eth_mask->src_addr.addr_bytes[4]) << 8) +
+ (eth_spec->src_addr.addr_bytes[5] &
+ eth_mask->src_addr.addr_bytes[5]);
+
+ qw_data[3] = ntohs(eth_spec->ether_type &
+ eth_mask->ether_type) << 16;
+
+ qw_mask[0] = (eth_mask->dst_addr.addr_bytes[0] << 24) +
+ (eth_mask->dst_addr.addr_bytes[1] << 16) +
+ (eth_mask->dst_addr.addr_bytes[2] << 8) +
+ eth_mask->dst_addr.addr_bytes[3];
+
+ qw_mask[1] = (eth_mask->dst_addr.addr_bytes[4] << 24) +
+ (eth_mask->dst_addr.addr_bytes[5] << 16) +
+ (eth_mask->src_addr.addr_bytes[0] << 8) +
+ eth_mask->src_addr.addr_bytes[1];
+
+ qw_mask[2] = (eth_mask->src_addr.addr_bytes[2] << 24) +
+ (eth_mask->src_addr.addr_bytes[3] << 16) +
+ (eth_mask->src_addr.addr_bytes[4] << 8) +
+ eth_mask->src_addr.addr_bytes[5];
+
+ qw_mask[3] = ntohs(eth_mask->ether_type) << 16;
+
+ km_add_match_elem(&fd->km,
+ &qw_data[(size_t)(qw_counter * 4)],
+ &qw_mask[(size_t)(qw_counter * 4)], 4, DYN_L2, 0);
+ set_key_def_qw(key_def, qw_counter, DYN_L2, 0);
+ qw_counter += 1;
+
+ if (!non_zero)
+ qw_free -= 1;
+
+ } else if (eth_mask->ether_type != 0) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(eth_mask->ether_type) << 16;
+ sw_data[0] = ntohs(eth_spec->ether_type) << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, DYN_L2, 12);
+ set_key_def_sw(key_def, sw_counter, DYN_L2, 12);
+ sw_counter += 1;
+ }
+
+ fd->l2_prot = PROT_L2_ETH2;
+ }
+
+ break;
+
dev->ndev->adapter_no, dev->port);
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 15/73] net/ntnic: add item IPv4
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (13 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 14/73] net/ntnic: add item eth Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 16/73] net/ntnic: add item ICMP Serhii Iliushyk
` (57 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_IPV4
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 162 ++++++++++++++++++
2 files changed, 163 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 36b8212bae..bae25d2e2d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+ipv4 = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 93f666a054..d5d853351e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -664,7 +664,169 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv4 *ipv4_spec =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].mask;
+
+ if (ipv4_spec == NULL || ipv4_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.version_ihl != 0 ||
+ ipv4_mask->hdr.type_of_service != 0 ||
+ ipv4_mask->hdr.total_length != 0 ||
+ ipv4_mask->hdr.packet_id != 0 ||
+ (ipv4_mask->hdr.fragment_offset != 0 &&
+ (ipv4_spec->hdr.fragment_offset != 0xffff ||
+ ipv4_mask->hdr.fragment_offset != 0xffff)) ||
+ ipv4_mask->hdr.time_to_live != 0 ||
+ ipv4_mask->hdr.hdr_checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv4 field not supported by running SW version.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (ipv4_spec->hdr.fragment_offset == 0xffff &&
+ ipv4_mask->hdr.fragment_offset == 0xffff) {
+ fd->fragmentation = 0xfe;
+ }
+
+ int match_cnt = (ipv4_mask->hdr.src_addr != 0) +
+ (ipv4_mask->hdr.dst_addr != 0) +
+ (ipv4_mask->hdr.next_proto_id != 0);
+
+ if (match_cnt <= 0) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (qw_free > 0 &&
+ (match_cnt >= 2 ||
+ (match_cnt == 1 && sw_counter >= 2))) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED,
+ error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_mask[0] = 0;
+ qw_data[0] = 0;
+
+ qw_mask[1] = ipv4_mask->hdr.next_proto_id << 16;
+ qw_data[1] = ipv4_spec->hdr.next_proto_id
+ << 16 & qw_mask[1];
+
+ qw_mask[2] = ntohl(ipv4_mask->hdr.src_addr);
+ qw_mask[3] = ntohl(ipv4_mask->hdr.dst_addr);
+
+ qw_data[2] = ntohl(ipv4_spec->hdr.src_addr) & qw_mask[2];
+ qw_data[3] = ntohl(ipv4_spec->hdr.dst_addr) & qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.src_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.src_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.src_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 12);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 12);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.dst_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.dst_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.dst_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 16);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 16);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.next_proto_id) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv4_mask->hdr.next_proto_id << 16;
+ sw_data[0] = ipv4_spec->hdr.next_proto_id
+ << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ sw_counter += 1;
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 16/73] net/ntnic: add item ICMP
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (14 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 15/73] net/ntnic: add item IPv4 Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 17/73] net/ntnic: add item port ID Serhii Iliushyk
` (56 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_ICMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 101 ++++++++++++++++++
2 files changed, 102 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index bae25d2e2d..d403ea01f3 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+icmp = Y
ipv4 = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index d5d853351e..6bf0ff8821 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -827,6 +827,107 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp *icmp_spec =
+ (const struct rte_flow_item_icmp *)elem[eidx].spec;
+ const struct rte_flow_item_icmp *icmp_mask =
+ (const struct rte_flow_item_icmp *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->hdr.icmp_cksum != 0 ||
+ icmp_mask->hdr.icmp_ident != 0 ||
+ icmp_mask->hdr.icmp_seq_nb != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->hdr.icmp_type || icmp_mask->hdr.icmp_code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ sw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter,
+ any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 17/73] net/ntnic: add item port ID
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (15 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 16/73] net/ntnic: add item ICMP Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 18/73] net/ntnic: add item void Serhii Iliushyk
` (55 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_PORT_ID
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../flow_api/profile_inline/flow_api_profile_inline.c | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index d403ea01f3..cdf119c4ae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,6 +18,7 @@ any = Y
eth = Y
icmp = Y
ipv4 = Y
+port_id = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6bf0ff8821..efefd52979 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -928,6 +928,17 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
+ dev->ndev->adapter_no, dev->port);
+
+ if (elem[eidx].spec) {
+ *in_port_id =
+ ((const struct rte_flow_item_port_id *)elem[eidx].spec)->id;
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 18/73] net/ntnic: add item void
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (16 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 17/73] net/ntnic: add item port ID Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 19/73] net/ntnic: add item UDP Serhii Iliushyk
` (54 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Add support for RTE_FLOW_ITEM_TYPE_VOID
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../nthw/flow_api/profile_inline/flow_api_profile_inline.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index efefd52979..e47014615e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -939,6 +939,10 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_VOID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VOID",
+ dev->ndev->adapter_no, dev->port);
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 19/73] net/ntnic: add item UDP
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (17 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 18/73] net/ntnic: add item void Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 20/73] net/ntnic: add action TCP Serhii Iliushyk
` (53 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ITEM_TYPE_UDP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 103 ++++++++++++++++++
3 files changed, 106 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index cdf119c4ae..61a3d87909 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+udp = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 0c22129fb4..a95fb69870 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index e47014615e..3d4bb6e1eb 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -828,6 +828,101 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_udp *udp_spec =
+ (const struct rte_flow_item_udp *)elem[eidx].spec;
+ const struct rte_flow_item_udp *udp_mask =
+ (const struct rte_flow_item_udp *)elem[eidx].mask;
+
+ if (udp_spec == NULL || udp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (udp_mask->hdr.dgram_len != 0 ||
+ udp_mask->hdr.dgram_cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested UDP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (udp_mask->hdr.src_port || udp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(udp_mask->hdr.src_port) << 16) |
+ ntohs(udp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(udp_mask->hdr.src_port)
+ << 16) | ntohs(udp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -961,12 +1056,20 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 20/73] net/ntnic: add action TCP
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (18 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 19/73] net/ntnic: add item UDP Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 21/73] net/ntnic: add action VLAN Serhii Iliushyk
` (52 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ITEM_TYPE_TCP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 108 ++++++++++++++++++
3 files changed, 111 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 61a3d87909..e3c3982895 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+tcp = Y
udp = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a95fb69870..a1aa74caf5 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -177,6 +178,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 3d4bb6e1eb..f24178a164 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1024,6 +1024,106 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_tcp *tcp_spec =
+ (const struct rte_flow_item_tcp *)elem[eidx].spec;
+ const struct rte_flow_item_tcp *tcp_mask =
+ (const struct rte_flow_item_tcp *)elem[eidx].mask;
+
+ if (tcp_spec == NULL || tcp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (tcp_mask->hdr.sent_seq != 0 ||
+ tcp_mask->hdr.recv_ack != 0 ||
+ tcp_mask->hdr.data_off != 0 ||
+ tcp_mask->hdr.tcp_flags != 0 ||
+ tcp_mask->hdr.rx_win != 0 ||
+ tcp_mask->hdr.cksum != 0 ||
+ tcp_mask->hdr.tcp_urp != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested TCP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (tcp_mask->hdr.src_port || tcp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ sw_data[0] =
+ ((ntohs(tcp_spec->hdr.src_port) << 16) |
+ ntohs(tcp_spec->hdr.dst_port)) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(tcp_spec->hdr.src_port)
+ << 16) | ntohs(tcp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1056,6 +1156,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_L4_UDP:
fh->flm_prot = 17;
break;
@@ -1066,6 +1170,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_TUN_L4_UDP:
fh->flm_prot = 17;
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 21/73] net/ntnic: add action VLAN
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (19 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 20/73] net/ntnic: add action TCP Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 22/73] net/ntnic: add item SCTP Serhii Iliushyk
` (51 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ITEM_TYPE_VLAN.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 94 +++++++++++++++++++
3 files changed, 96 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e3c3982895..8b4821d6d0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -21,6 +21,7 @@ ipv4 = Y
port_id = Y
tcp = Y
udp = Y
+vlan = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a1aa74caf5..82ac3d0ff3 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -134,6 +134,7 @@ static inline int is_non_zero(const void *addr, size_t n)
enum frame_offs_e {
DYN_L2 = 1,
+ DYN_FIRST_VLAN = 2,
DYN_L3 = 4,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f24178a164..7c1b632dc0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -504,6 +504,20 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return -1;
}
+ if (implicit_vlan_vid > 0) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = 0x0fff;
+ sw_data[0] = implicit_vlan_vid & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1, DYN_FIRST_VLAN, 0);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN, 0);
+ sw_counter += 1;
+
+ fd->vlans += 1;
+ }
+
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
@@ -664,6 +678,86 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VLAN",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_vlan_hdr *vlan_spec =
+ (const struct rte_vlan_hdr *)elem[eidx].spec;
+ const struct rte_vlan_hdr *vlan_mask =
+ (const struct rte_vlan_hdr *)elem[eidx].mask;
+
+ if (vlan_spec == NULL || vlan_mask == NULL) {
+ fd->vlans += 1;
+ break;
+ }
+
+ if (!vlan_mask->vlan_tci && !vlan_mask->eth_proto)
+ break;
+
+ if (implicit_vlan_vid > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple VLANs not supported for implicit VLAN patterns.");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM,
+ error);
+ return -1;
+ }
+
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ sw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_qw(key_def, qw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ fd->vlans += 1;
+ }
+
+ break;
case RTE_FLOW_ITEM_TYPE_IPV4:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
--
2.45.0
* [PATCH v3 22/73] net/ntnic: add item SCTP
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (20 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 21/73] net/ntnic: add action VLAN Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 23/73] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
` (50 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ITEM_TYPE_SCTP.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 102 ++++++++++++++++++
3 files changed, 105 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b4821d6d0..6691b6dce2 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+sctp = Y
tcp = Y
udp = Y
vlan = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 82ac3d0ff3..f1c57fa9fc 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -169,6 +169,7 @@ enum {
enum {
PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
+ PROT_L4_SCTP = 3,
PROT_L4_ICMP = 4
};
@@ -181,6 +182,7 @@ enum {
PROT_TUN_L4_OTHER = 0,
PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
+ PROT_TUN_L4_SCTP = 3,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7c1b632dc0..9460325cf6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1017,6 +1017,100 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_SCTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_sctp *sctp_spec =
+ (const struct rte_flow_item_sctp *)elem[eidx].spec;
+ const struct rte_flow_item_sctp *sctp_mask =
+ (const struct rte_flow_item_sctp *)elem[eidx].mask;
+
+ if (sctp_spec == NULL || sctp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (sctp_mask->hdr.tag != 0 || sctp_mask->hdr.cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested SCTP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (sctp_mask->hdr.src_port || sctp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -1258,6 +1352,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
@@ -1272,6 +1370,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_TUN_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
* [PATCH v3 23/73] net/ntnic: add items IPv6 and ICMPv6
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (21 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 22/73] net/ntnic: add item SCTP Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 24/73] net/ntnic: add action modify field Serhii Iliushyk
` (49 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for
* RTE_FLOW_ITEM_TYPE_IPV6
* RTE_FLOW_ITEM_TYPE_ICMP6
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 2 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 27 ++
.../profile_inline/flow_api_profile_inline.c | 273 ++++++++++++++++++
4 files changed, 304 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 6691b6dce2..320d3c7e0b 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,7 +17,9 @@ x86-64 = Y
any = Y
eth = Y
icmp = Y
+icmp6 = Y
ipv4 = Y
+ipv6 = Y
port_id = Y
sctp = Y
tcp = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index f1c57fa9fc..4f381bc0ef 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -164,6 +164,7 @@ enum {
enum {
PROT_L3_IPV4 = 1,
+ PROT_L3_IPV6 = 2
};
enum {
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
+ PROT_TUN_L3_IPV6 = 2
};
enum {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 6800a8d834..2aee2ee973 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -47,6 +47,33 @@ static const struct {
} err_msg[] = {
/* 00 */ { "Operation successfully completed" },
/* 01 */ { "Operation failed" },
+ /* 02 */ { "Memory allocation failed" },
+ /* 03 */ { "Too many output destinations" },
+ /* 04 */ { "Too many output queues for RSS" },
+ /* 05 */ { "The VLAN TPID specified is not supported" },
+ /* 06 */ { "The VxLan Push header specified is not accepted" },
+ /* 07 */ { "While interpreting VxLan Pop action, could not find a destination port" },
+ /* 08 */ { "Failed in creating a HW-internal VTEP port" },
+ /* 09 */ { "Too many VLAN tag matches" },
+ /* 10 */ { "IPv6 invalid header specified" },
+ /* 11 */ { "Too many tunnel ports. HW limit reached" },
+ /* 12 */ { "Unknown or unsupported flow match element received" },
+ /* 13 */ { "Match failed because of HW limitations" },
+ /* 14 */ { "Match failed because of HW resource limitations" },
+ /* 15 */ { "Match failed because of too complex element definitions" },
+ /* 16 */ { "Action failed. Too many output destinations" },
+ /* 17 */ { "Action Output failed, due to HW resource exhaustion" },
+ /* 18 */ { "Push Tunnel Header action cannot output to multiple destination queues" },
+ /* 19 */ { "Inline action HW resource exhaustion" },
+ /* 20 */ { "Action retransmit/recirculate HW resource exhaustion" },
+ /* 21 */ { "Flow counter HW resource exhaustion" },
+ /* 22 */ { "Internal HW resource exhaustion to handle Actions" },
+ /* 23 */ { "Internal HW QSL compare failed" },
+ /* 24 */ { "Internal CAT CFN reuse failed" },
+ /* 25 */ { "Match variations too complex" },
+ /* 26 */ { "Match failed because of CAM/TCAM full" },
+ /* 27 */ { "Internal creation of a tunnel end point port failed" },
+ /* 28 */ { "Unknown or unsupported flow action received" },
/* 29 */ { "Removing flow failed" },
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9460325cf6..0b0b9f2033 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -538,6 +538,22 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6: {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec != NULL && ipv6_mask != NULL) {
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16))
+ qw_reserved_ipv6 += 1;
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16))
+ qw_reserved_ipv6 += 1;
+ }
+ }
+ break;
+
default:
break;
}
@@ -922,6 +938,164 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec == NULL || ipv6_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ break;
+ }
+
+ fd->l3_prot = PROT_L3_IPV6;
+ if (ipv6_mask->hdr.vtc_flow != 0 ||
+ ipv6_mask->hdr.payload_len != 0 ||
+ ipv6_mask->hdr.hop_limits != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv6 field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.src_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.src_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ qw_counter += 1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.dst_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.dst_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 24);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 24);
+ qw_counter += 1;
+ }
+
+ if (ipv6_mask->hdr.proto != 0) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv6_mask->hdr.proto << 8;
+ sw_data[0] = ipv6_spec->hdr.proto << 8 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = 0;
+ qw_data[1] = ipv6_spec->hdr.proto << 8;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = 0;
+ qw_mask[1] = ipv6_mask->hdr.proto << 8;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_UDP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
dev->ndev->adapter_no, dev->port);
@@ -1212,6 +1386,105 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp6 *icmp_spec =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].spec;
+ const struct rte_flow_item_icmp6 *icmp_mask =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP6 field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->type || icmp_mask->code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ sw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_TCP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
dev->ndev->adapter_no, dev->port);
--
2.45.0
* [PATCH v3 24/73] net/ntnic: add action modify field
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (22 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 23/73] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 25/73] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
` (48 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ACTION_TYPE_MODIFY_FIELD.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 7 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 181 ++++++++++++++++++
4 files changed, 190 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 320d3c7e0b..4201c8e8b9 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -30,5 +30,6 @@ vlan = Y
drop = Y
jump = Y
mark = Y
+modify_field = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 13fad2760a..f6557d0d20 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,10 @@ struct nic_flow_def {
*/
struct {
uint32_t select;
+ uint32_t dyn;
+ uint32_t ofs;
+ uint32_t len;
+ uint32_t level;
union {
uint8_t value8[16];
uint16_t value16[8];
@@ -137,6 +141,9 @@ struct nic_flow_def {
} modify_field[MAX_CPY_WRITERS_SUPPORTED];
uint32_t modify_field_count;
+ uint8_t ttl_sub_enable;
+ uint8_t ttl_sub_ipv4;
+ uint8_t ttl_sub_outer;
/*
* Key Matcher flow definitions
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 4f381bc0ef..6a8a38636f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -140,6 +140,7 @@ enum frame_offs_e {
DYN_L4_PAYLOAD = 8,
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
+ DYN_TUN_L4_PAYLOAD = 17,
};
/* Sideband info bit indicator */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0b0b9f2033..2cda2e8b14 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -323,6 +323,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
{
unsigned int encap_decap_order = 0;
+ uint64_t modify_field_use_flags = 0x0;
+
*num_dest_port = 0;
*num_queues = 0;
@@ -461,6 +463,185 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
+ {
+ /* Note: This copy method will not work for FLOW_FIELD_POINTER */
+ struct rte_flow_action_modify_field modify_field_tmp;
+ const struct rte_flow_action_modify_field *modify_field =
+ memcpy_mask_if(&modify_field_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_modify_field));
+
+ uint64_t modify_field_use_flag = 0;
+
+ if (modify_field->src.field != RTE_FLOW_FIELD_VALUE) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only src type VALUE is supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.level > 2) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only dst level 0, 1, and 2 is supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL ||
+ modify_field->dst.field == RTE_FLOW_FIELD_IPV6_HOPLIMIT) {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SUB) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SUB is supported for TTL/HOPLIMIT.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->ttl_sub_enable) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD TTL/HOPLIMIT resource already in use.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->ttl_sub_enable = 1;
+ fd->ttl_sub_ipv4 =
+ (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL)
+ ? 1
+ : 0;
+ fd->ttl_sub_outer = (modify_field->dst.level <= 1) ? 1 : 0;
+
+ } else {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SET) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SET is supported in general.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->modify_field_count >=
+ dev->ndev->be.tpe.nb_cpy_writers) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD exceeded maximum of %u MODIFY_FIELD actions.",
+ dev->ndev->be.tpe.nb_cpy_writers);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ int mod_outer = modify_field->dst.level <= 1;
+
+ switch (modify_field->dst.field) {
+ case RTE_FLOW_FIELD_IPV4_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 1;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV6_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV6;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ /*
+ * len=2 is needed because
+ * IPv6 DSCP overlaps 2 bytes.
+ */
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_PSC_QFI:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_RQI_QFI;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 14;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 12;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 16;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_SRC:
+ case RTE_FLOW_FIELD_UDP_PORT_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_DST:
+ case RTE_FLOW_FIELD_UDP_PORT_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 2;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_TEID:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_TEID;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 4;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type is not supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ modify_field_use_flag = 1
+ << fd->modify_field[fd->modify_field_count].select;
+
+ if (modify_field_use_flag & modify_field_use_flags) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type hardware resource already used.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ memcpy(fd->modify_field[fd->modify_field_count].value8,
+ modify_field->src.value, 16);
+
+ fd->modify_field[fd->modify_field_count].level =
+ modify_field->dst.level;
+
+ modify_field_use_flags |= modify_field_use_flag;
+ fd->modify_field_count += 1;
+ }
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
* [PATCH v3 25/73] net/ntnic: add items gtp and actions raw encap/decap
@ 2024-10-23 16:59 ` Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the following items and actions:
* RTE_FLOW_ITEM_TYPE_GTP
* RTE_FLOW_ITEM_TYPE_GTP_PSC
* RTE_FLOW_ACTION_TYPE_RAW_ENCAP
* RTE_FLOW_ACTION_TYPE_RAW_DECAP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 4 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/flow_api_engine.h | 40 ++
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/include/stream_binary_flow_api.h | 22 ++
.../profile_inline/flow_api_profile_inline.c | 366 +++++++++++++++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 278 ++++++++++++-
7 files changed, 713 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4201c8e8b9..4cb9509742 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,8 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+gtp = Y
+gtp_psc = Y
icmp = Y
icmp6 = Y
ipv4 = Y
@@ -33,3 +35,5 @@ mark = Y
modify_field = Y
port_id = Y
queue = Y
+raw_decap = Y
+raw_encap = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 179542d2b2..70e6cad195 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,8 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct flow_action_raw_encap encap;
+ struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
};
@@ -52,6 +54,8 @@ enum nt_rte_flow_item_type {
};
extern rte_spinlock_t flow_lock;
+
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out);
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index f6557d0d20..b1d39b919b 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -56,6 +56,29 @@ enum res_type_e {
#define MAX_MATCH_FIELDS 16
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+ uint32_t user_port_id;
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+ uint16_t ip_csum_precalc;
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
+};
+
struct match_elem_s {
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
@@ -124,6 +147,23 @@ struct nic_flow_def {
int full_offload;
+ /*
+ * Action push tunnel
+ */
+ struct tunnel_header_s tun_hdr;
+
+ /*
+ * If DPDK RTE tunnel helper API used
+ * this holds the tunnel if used in flow
+ */
+ struct tunnel_s *tnl;
+
+ /*
+ * Header Stripper
+ */
+ int header_strip_end_dyn;
+ int header_strip_end_ofs;
+
/*
* Modify field
*/
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6a8a38636f..1b45ea4296 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -175,6 +175,10 @@ enum {
PROT_L4_ICMP = 4
};
+enum {
+ PROT_TUN_GTPV1U = 6,
+};
+
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index d878b848c2..8097518d61 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -18,6 +18,7 @@
#define FLOW_MAX_QUEUES 128
+#define RAW_ENCAP_DECAP_ELEMS_MAX 16
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
@@ -31,6 +32,27 @@ struct flow_queue_id_s {
int hw_id;
};
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ */
+struct flow_action_raw_encap {
+ uint8_t *data;
+ uint8_t *preserve;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ */
+struct flow_action_raw_decap {
+ uint8_t *data;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
struct flow_eth_dev; /* port device */
struct flow_handle;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 2cda2e8b14..9fc4908975 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -463,6 +463,202 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
+
+ if (action[aidx].conf) {
+ const struct flow_action_raw_encap *encap =
+ (const struct flow_action_raw_encap *)action[aidx].conf;
+ const struct flow_action_raw_encap *encap_mask = action_mask
+ ? (const struct flow_action_raw_encap *)action_mask[aidx]
+ .conf
+ : NULL;
+ const struct rte_flow_item *items = encap->items;
+
+ if (encap_decap_order != 1) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (encap->size == 0 || encap->size > 255 ||
+ encap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP data/size invalid.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 2;
+
+ fd->tun_hdr.len = (uint8_t)encap->size;
+
+ if (encap_mask) {
+ memcpy_mask_if(fd->tun_hdr.d.hdr8, encap->data,
+ encap_mask->data, fd->tun_hdr.len);
+
+ } else {
+ memcpy(fd->tun_hdr.d.hdr8, encap->data, fd->tun_hdr.len);
+ }
+
+ while (items->type != RTE_FLOW_ITEM_TYPE_END) {
+ switch (items->type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ fd->tun_hdr.l2_len = 14;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->tun_hdr.nb_vlans += 1;
+ fd->tun_hdr.l2_len += 4;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ fd->tun_hdr.ip_version = 4;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv4_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 3] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->tun_hdr.ip_version = 6;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv6_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_sctp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_tcp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_udp_hdr);
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_icmp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->tun_hdr.l4_len =
+ sizeof(struct rte_flow_item_icmp6);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 3] = 0xfd;
+ break;
+
+ default:
+ break;
+ }
+
+ items++;
+ }
+
+ if (fd->tun_hdr.nb_vlans > 3) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Encapsulation with %d vlans not supported.",
+ (int)fd->tun_hdr.nb_vlans);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ /* Convert encap data to 128-bit little endian */
+ for (size_t i = 0; i < (encap->size + 15) / 16; ++i) {
+ uint8_t *data = fd->tun_hdr.d.hdr8 + i * 16;
+
+ for (unsigned int j = 0; j < 8; ++j) {
+ uint8_t t = data[j];
+ data[j] = data[15 - j];
+ data[15 - j] = t;
+ }
+ }
+ }
+
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_DECAP", dev);
+
+ if (action[aidx].conf) {
+ /* Mask is N/A for RAW_DECAP */
+ const struct flow_action_raw_decap *decap =
+ (const struct flow_action_raw_decap *)action[aidx].conf;
+
+ if (encap_decap_order != 0) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (decap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_DECAP must decap something.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 1;
+
+ switch (decap->items[decap->item_count - 2].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->header_strip_end_dyn = DYN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->header_strip_end_dyn = DYN_L4;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->header_strip_end_dyn = DYN_L4_PAYLOAD;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ fd->header_strip_end_dyn = DYN_TUN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ default:
+ fd->header_strip_end_dyn = DYN_L2;
+ fd->header_strip_end_ofs = 0;
+ break;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
{
@@ -1766,6 +1962,174 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_hdr *gtp_spec =
+ (const struct rte_gtp_hdr *)elem[eidx].spec;
+ const struct rte_gtp_hdr *gtp_mask =
+ (const struct rte_gtp_hdr *)elem[eidx].mask;
+
+ if (gtp_spec == NULL || gtp_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_mask->gtp_hdr_info != 0 ||
+ gtp_mask->msg_type != 0 || gtp_mask->plen != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_mask->teid) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_mask->teid);
+ sw_data[0] =
+ ntohl(gtp_spec->teid) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_spec->teid);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_mask->teid);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP_PSC:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP_PSC",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_spec =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].spec;
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_mask =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].mask;
+
+ if (gtp_psc_spec == NULL || gtp_psc_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_psc_mask->type != 0 ||
+ gtp_psc_mask->ext_hdr_len != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP PSC field is not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_psc_mask->qfi) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ sw_data[0] = ntohl(gtp_psc_spec->qfi) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 14);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_psc_spec->qfi);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 14);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1929,7 +2293,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
- uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index b9d723c9dd..df391b6399 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -16,6 +16,211 @@
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out)
+{
+ int hdri = 0;
+ int pkti = 0;
+
+ /* Ethernet */
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_ether_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ETH;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ rte_be16_t ether_type = ((struct rte_ether_hdr *)&data[pkti])->ether_type;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ether_hdr);
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* VLAN */
+ while (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ1)) {
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_vlan_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_VLAN;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ ether_type = ((struct rte_vlan_hdr *)&data[pkti])->eth_proto;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_vlan_hdr);
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 3 */
+ uint8_t next_header = 0;
+
+ if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) && (data[pkti] & 0xF0) == 0x40) {
+ if (size - pkti < (int)sizeof(struct rte_ipv4_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_header = data[pkti + 9];
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ipv4_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 4 */
+ int gtpu_encap = 0;
+
+ if (next_header == 1) { /* ICMP */
+ if (size - pkti < (int)sizeof(struct rte_icmp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 58) { /* ICMP6 */
+ if (size - pkti < (int)sizeof(struct rte_flow_item_icmp6))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP6;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 6) { /* TCP */
+ if (size - pkti < (int)sizeof(struct rte_tcp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_TCP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_tcp_hdr);
+
+ } else if (next_header == 17) { /* UDP */
+ if (size - pkti < (int)sizeof(struct rte_udp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_UDP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ gtpu_encap = ((struct rte_udp_hdr *)&data[pkti])->dst_port ==
+ rte_cpu_to_be_16(RTE_GTPU_UDP_PORT);
+
+ hdri += 1;
+ pkti += sizeof(struct rte_udp_hdr);
+
+ } else if (next_header == 132) {/* SCTP */
+ if (size - pkti < (int)sizeof(struct rte_sctp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_SCTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_sctp_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* GTPv1-U */
+ if (gtpu_encap) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ int extension_present_bit = ((struct rte_gtp_hdr *)&data[pkti])->e;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr);
+
+ if (extension_present_bit) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr_ext_word))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ uint8_t next_ext = ((struct rte_gtp_hdr_ext_word *)&data[pkti])->next_ext;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr_ext_word);
+
+ while (next_ext) {
+ size_t ext_len = data[pkti] * 4;
+
+ if (size - pkti < (int)ext_len)
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_ext = data[pkti + ext_len - 1];
+
+ hdri += 1;
+ pkti += ext_len;
+ }
+ }
+ }
+
+ if (size - pkti != 0)
+ return -1;
+
+interpret_end:
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_END;
+ out[hdri].spec = NULL;
+ out[hdri].mask = NULL;
+
+ return hdri + 1;
+}
+
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
{
if (error) {
@@ -95,13 +300,78 @@ int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item
return (type >= 0) ? 0 : -1;
}
-int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- int max_elem __rte_unused,
- uint32_t queue_offset __rte_unused)
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset)
{
+ int aidx = 0;
int type = -1;
+ do {
+ type = actions[aidx].type;
+ if (type >= 0) {
+ action->flow_actions[aidx].type = type;
+
+ /*
+ * Non-compatible actions handled here
+ */
+ switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
+ const struct rte_flow_action_raw_decap *decap =
+ (const struct rte_flow_action_raw_decap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(decap->data, NULL, decap->size,
+ action->decap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->decap.data = decap->data;
+ action->decap.size = decap->size;
+ action->decap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->decap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: {
+ const struct rte_flow_action_raw_encap *encap =
+ (const struct rte_flow_action_raw_encap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(encap->data, encap->preserve,
+ encap->size, action->encap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->encap.data = encap->data;
+ action->encap.preserve = encap->preserve;
+ action->encap.size = encap->size;
+ action->encap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->encap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE: {
+ const struct rte_flow_action_queue *queue =
+ (const struct rte_flow_action_queue *)actions[aidx].conf;
+ action->queue.index = queue->index + queue_offset;
+ action->flow_actions[aidx].conf = &action->queue;
+ }
+ break;
+
+ default: {
+ action->flow_actions[aidx].conf = actions[aidx].conf;
+ }
+ break;
+ }
+
+ aidx++;
+
+ if (aidx == max_elem)
+ return -1;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
return (type >= 0) ? 0 : -1;
}
--
2.45.0
* [PATCH v3 26/73] net/ntnic: add cat module
@ 2024-10-23 16:59 ` Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Categorizer module’s main purpose is to select the behavior
of other modules in the FPGA pipeline depending on a protocol check.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 24 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 267 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 165 +++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 47 +++
.../profile_inline/flow_api_profile_inline.c | 83 ++++++
5 files changed, 586 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 1b45ea4296..87fc16ecb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -315,11 +315,35 @@ int hw_mod_cat_reset(struct flow_api_backend_s *be);
int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
+/* KCE/KCS/FTE KM */
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+/* KCE/KCS/FTE FLM */
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count);
+
int hw_mod_cat_kcc_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_exo_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index d266760123..9164ec1ae0 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -951,6 +951,97 @@ static int hw_mod_cat_fte_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_fte_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_fte_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ const uint32_t key_cnt = (_VER_ >= 20) ? 4 : 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8 * be->cat.nb_flow_types * key_cnt)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v18.fte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v21.fte[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, value, 1);
+}
+
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -964,6 +1055,45 @@ int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cte_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cte_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTE_ENABLE_BM:
+ GET_SET(be->cat.v18.cte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -979,6 +1109,51 @@ int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cts_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cts_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ int addr_size = (be->cat.cts_num + 1) / 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs * addr_size)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTS_CAT_A:
+ GET_SET(be->cat.v18.cts[index].cat_a, value);
+ break;
+
+ case HW_CAT_CTS_CAT_B:
+ GET_SET(be->cat.v18.cts[index].cat_b, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -992,6 +1167,98 @@ int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cot_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cot_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_COT_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->cat.v18.cot[index], (uint8_t)*value,
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->cat.v18.cot, struct cat_v18_cot_s, index, *value);
+ break;
+
+ case HW_CAT_COT_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->cat.v18.cot, struct cat_v18_cot_s, index, *value,
+ be->max_categories);
+ break;
+
+ case HW_CAT_COT_COPY_FROM:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memcpy(&be->cat.v18.cot[index], &be->cat.v18.cot[*value],
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COLOR:
+ GET_SET(be->cat.v18.cot[index].color, value);
+ break;
+
+ case HW_CAT_COT_KM:
+ GET_SET(be->cat.v18.cot[index].km, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cot_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4ea9387c80..addd5f288f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -22,6 +22,14 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
+ /* Items */
+ struct hw_db_inline_resource_db_cat {
+ struct hw_db_inline_cat_data data;
+ int ref;
+ } *cat;
+
+ uint32_t nb_cat;
+
/* Hardware */
struct hw_db_inline_resource_db_cfn {
@@ -47,6 +55,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_cat = ndev->be.cat.nb_cat_funcs;
+ db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
+
+ if (db->cat == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -56,6 +72,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->cat);
free(db->cfn);
@@ -70,6 +87,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_CAT:
+ hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_COT:
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
@@ -80,6 +101,69 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+/******************************************************************************/
+/* Filter */
+/******************************************************************************/
+
+/*
+ * Set up a filter to match:
+ *    All packets in CFN checks
+ *    All packets in KM
+ *    All packets in FLM with look-up C FT equal to the specified argument
+ *
+ * Set up a QSL recipe to DROP all matching packets
+ *
+ * Note: QSL recipe 0 uses DISCARD in order to allow for exception paths (UNMQ).
+ *       Consequently, another QSL recipe with hard DROP is needed.
+ */
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id)
+{
+	(void)ft;
+
+	const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ /* Select and enable QSL recipe */
+ if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
+ return -1;
+
+ if (hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6))
+ return -1;
+
+ if (hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0x8))
+ return -1;
+
+ if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ /* Make all CFN checks TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, 0x0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x1))
+ return -1;
+
+ /* Final match: look-up_A == TRUE && look-up_C == TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3))
+ return -1;
+
+ if (hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ return 0;
+}
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -150,3 +234,84 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
db->cot[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* CAT */
+/******************************************************************************/
+
+static int hw_db_inline_cat_compare(const struct hw_db_inline_cat_data *data1,
+ const struct hw_db_inline_cat_data *data2)
+{
+ return data1->vlan_mask == data2->vlan_mask &&
+ data1->mac_port_mask == data2->mac_port_mask &&
+ data1->ptc_mask_frag == data2->ptc_mask_frag &&
+ data1->ptc_mask_l2 == data2->ptc_mask_l2 &&
+ data1->ptc_mask_l3 == data2->ptc_mask_l3 &&
+ data1->ptc_mask_l4 == data2->ptc_mask_l4 &&
+ data1->ptc_mask_tunnel == data2->ptc_mask_tunnel &&
+ data1->ptc_mask_l3_tunnel == data2->ptc_mask_l3_tunnel &&
+ data1->ptc_mask_l4_tunnel == data2->ptc_mask_l4_tunnel &&
+ data1->err_mask_ttl_tunnel == data2->err_mask_ttl_tunnel &&
+ data1->err_mask_ttl == data2->err_mask_ttl && data1->ip_prot == data2->ip_prot &&
+ data1->ip_prot_tunnel == data2->ip_prot_tunnel;
+}
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cat_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_CAT;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ int ref = db->cat[i].ref;
+
+ if (ref > 0 && hw_db_inline_cat_compare(data, &db->cat[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cat_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cat[idx.ids].ref = 1;
+ memcpy(&db->cat[idx.ids].data, data, sizeof(struct hw_db_inline_cat_data));
+
+ return idx;
+}
+
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cat[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cat[idx.ids].ref -= 1;
+
+ if (db->cat[idx.ids].ref <= 0) {
+ memset(&db->cat[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cat_data));
+ db->cat[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 0116af015d..38502ac1ec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,12 +36,37 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_cat_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
+ HW_DB_IDX_TYPE_CAT,
};
/* Functionality data types */
+struct hw_db_inline_cat_data {
+ uint32_t vlan_mask : 4;
+ uint32_t mac_port_mask : 8;
+ uint32_t ptc_mask_frag : 4;
+ uint32_t ptc_mask_l2 : 7;
+ uint32_t ptc_mask_l3 : 3;
+ uint32_t ptc_mask_l4 : 5;
+ uint32_t padding0 : 1;
+
+ uint32_t ptc_mask_tunnel : 11;
+ uint32_t ptc_mask_l3_tunnel : 3;
+ uint32_t ptc_mask_l4_tunnel : 5;
+ uint32_t err_mask_ttl_tunnel : 2;
+ uint32_t err_mask_ttl : 2;
+ uint32_t padding1 : 9;
+
+ uint8_t ip_prot;
+ uint8_t ip_prot_tunnel;
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -70,6 +95,16 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ };
+ };
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -84,4 +119,16 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+/**/
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data);
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+
+/**/
+
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9fc4908975..5176464054 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2347,6 +2351,67 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ struct hw_db_inline_action_set_data action_set_data = { 0 };
+ (void)action_set_data;
+
+ if (fd->jump_to_group != UINT32_MAX) {
+ /* Action Set only contains jump */
+ action_set_data.contains_jump = 1;
+ action_set_data.jump = fd->jump_to_group;
+
+ } else {
+ /* Action Set doesn't contain jump */
+ action_set_data.contains_jump = 0;
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = 0,
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle,
+ &cot_data);
+ fh->db_idxs[fh->db_idx_counter++] = cot_idx.raw;
+ action_set_data.cot = cot_idx;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
+
+ /* Setup CAT */
+ struct hw_db_inline_cat_data cat_data = {
+ .vlan_mask = (0xf << fd->vlans) & 0xf,
+ .mac_port_mask = 1 << fh->port_id,
+ .ptc_mask_frag = fd->fragmentation,
+ .ptc_mask_l2 = fd->l2_prot != -1 ? (1 << fd->l2_prot) : -1,
+ .ptc_mask_l3 = fd->l3_prot != -1 ? (1 << fd->l3_prot) : -1,
+ .ptc_mask_l4 = fd->l4_prot != -1 ? (1 << fd->l4_prot) : -1,
+ .err_mask_ttl = (fd->ttl_sub_enable &&
+ fd->ttl_sub_outer) ? -1 : 0x1,
+ .ptc_mask_tunnel = fd->tunnel_prot !=
+ -1 ? (1 << fd->tunnel_prot) : -1,
+ .ptc_mask_l3_tunnel =
+ fd->tunnel_l3_prot != -1 ? (1 << fd->tunnel_l3_prot) : -1,
+ .ptc_mask_l4_tunnel =
+ fd->tunnel_l4_prot != -1 ? (1 << fd->tunnel_l4_prot) : -1,
+ .err_mask_ttl_tunnel =
+ (fd->ttl_sub_enable && !fd->ttl_sub_outer) ? -1 : 0x1,
+ .ip_prot = fd->ip_prot,
+ .ip_prot_tunnel = fd->tunnel_ip_prot,
+ };
+ struct hw_db_cat_idx cat_idx =
+ hw_db_inline_cat_add(dev->ndev, dev->ndev->hw_db_handle, &cat_data);
+ fh->db_idxs[fh->db_idx_counter++] = cat_idx.raw;
+
+ if (cat_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference CAT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2379,6 +2444,20 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* COT is locked to CFN. Don't set color for CFN 0 */
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+
+ if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+	/* Set up a filter matching all packets that violate the traffic policing parameters */
+ flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+
+ if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE,
+ NT_VIOLATING_MBR_QSL) < 0)
+ goto err_exit0;
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -2413,6 +2492,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PRESET_ALL, 0, 0, 0);
+ hw_mod_cat_cfn_flush(&ndev->be, 0, 1);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+ hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
hw_mod_tpe_reset(&ndev->be);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
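The `hw_db_inline_cat_add`/`_ref`/`_deref` functions in the patch above follow the same convention as the existing COT database: entries are deduplicated by value, reference-counted, and an error flag is raised when the table is exhausted. A minimal standalone sketch of that pattern (the types and names here are illustrative, not the driver's API):

```c
#include <assert.h>
#include <string.h>

#define NB_ENTRIES 4

struct res_entry {
	int key; /* stand-in for the real match data */
	int ref; /* 0 means the slot is free */
};

static struct res_entry table[NB_ENTRIES];

/* Return the slot index for 'key', reusing an equal live entry when
 * possible, otherwise claiming the first free slot; -1 if full. */
static int res_add(int key)
{
	int free_slot = -1;

	for (int i = 0; i < NB_ENTRIES; ++i) {
		if (table[i].ref > 0 && table[i].key == key) {
			table[i].ref += 1; /* dedup hit: bump refcount */
			return i;
		}

		if (free_slot < 0 && table[i].ref <= 0)
			free_slot = i;
	}

	if (free_slot < 0)
		return -1; /* table exhausted */

	table[free_slot].key = key;
	table[free_slot].ref = 1;
	return free_slot;
}

/* Drop one reference; clear the slot when the last one goes away. */
static void res_deref(int idx)
{
	if (--table[idx].ref <= 0)
		memset(&table[idx], 0, sizeof(table[idx]));
}
```

Deduplication is what lets many flows share one hardware recipe slot, which matters because the CAT/COT tables are small fixed-size hardware resources.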
* [PATCH v3 27/73] net/ntnic: add SLC LR module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (25 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 26/73] net/ntnic: add cat module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 28/73] net/ntnic: add PDB module Serhii Iliushyk
` (45 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new head to the packet.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 104 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 19 ++++
.../profile_inline/flow_api_profile_inline.c | 37 ++++++-
5 files changed, 257 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 87fc16ecb4..2711f44083 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -697,6 +697,8 @@ int hw_mod_slc_lr_alloc(struct flow_api_backend_s *be);
void hw_mod_slc_lr_free(struct flow_api_backend_s *be);
int hw_mod_slc_lr_reset(struct flow_api_backend_s *be);
int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value);
struct pdb_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
index 1d878f3f96..30e5e38690 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
@@ -66,3 +66,103 @@ int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int co
return be->iface->slc_lr_rcp_flush(be->be_dev, &be->slc_lr, start_idx, count);
}
+
+static int hw_mod_slc_lr_rcp_mod(struct flow_api_backend_s *be, enum hw_slc_lr_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 2:
+ switch (field) {
+ case HW_SLC_LR_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->slc_lr.v2.rcp[index], (uint8_t)*value,
+ sizeof(struct hw_mod_slc_lr_v2_s));
+ break;
+
+ case HW_SLC_LR_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value, be->max_categories);
+ break;
+
+ case HW_SLC_LR_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].head_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].tail_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_PCAP:
+ GET_SET(be->slc_lr.v2.rcp[index].pcap, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_slc_lr_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index addd5f288f..b17bce3745 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,7 +20,13 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_slc_lr {
+ struct hw_db_inline_slc_lr_data data;
+ int ref;
+ } *slc_lr;
+
uint32_t nb_cot;
+ uint32_t nb_slc_lr;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -55,6 +61,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_slc_lr = ndev->be.max_categories;
+ db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
+
+ if (db->slc_lr == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -72,6 +86,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->slc_lr);
free(db->cat);
free(db->cfn);
@@ -95,6 +110,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_SLC_LR:
+ hw_db_inline_slc_lr_deref(ndev, db_handle,
+ *(struct hw_db_slc_lr_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -235,6 +255,90 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* SLC_LR */
+/******************************************************************************/
+
+static int hw_db_inline_slc_lr_compare(const struct hw_db_inline_slc_lr_data *data1,
+ const struct hw_db_inline_slc_lr_data *data2)
+{
+ if (!data1->head_slice_en)
+ return data1->head_slice_en == data2->head_slice_en;
+
+ return data1->head_slice_en == data2->head_slice_en &&
+ data1->head_slice_dyn == data2->head_slice_dyn &&
+ data1->head_slice_ofs == data2->head_slice_ofs;
+}
+
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_slc_lr_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_SLC_LR;
+
+ for (uint32_t i = 1; i < db->nb_slc_lr; ++i) {
+ int ref = db->slc_lr[i].ref;
+
+ if (ref > 0 && hw_db_inline_slc_lr_compare(data, &db->slc_lr[i].data)) {
+ idx.ids = i;
+ hw_db_inline_slc_lr_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->slc_lr[idx.ids].ref = 1;
+ memcpy(&db->slc_lr[idx.ids].data, data, sizeof(struct hw_db_inline_slc_lr_data));
+
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_SLC_EN, idx.ids, data->head_slice_en);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_DYN, idx.ids, data->head_slice_dyn);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_OFS, idx.ids, data->head_slice_ofs);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->slc_lr[idx.ids].ref += 1;
+}
+
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->slc_lr[idx.ids].ref -= 1;
+
+ if (db->slc_lr[idx.ids].ref <= 0) {
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->slc_lr[idx.ids].data, 0x0, sizeof(struct hw_db_inline_slc_lr_data));
+ db->slc_lr[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 38502ac1ec..ef63336b1c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -40,10 +40,15 @@ struct hw_db_cat_idx {
HW_DB_IDX;
};
+struct hw_db_slc_lr_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_SLC_LR,
};
/* Functionality data types */
@@ -89,6 +94,13 @@ struct hw_db_inline_cot_data {
uint32_t padding : 24;
};
+struct hw_db_inline_slc_lr_data {
+ uint32_t head_slice_en : 1;
+ uint32_t head_slice_dyn : 5;
+ uint32_t head_slice_ofs : 8;
+ uint32_t padding : 18;
+};
+
struct hw_db_inline_hsh_data {
uint32_t func;
uint64_t hash_mask;
@@ -119,6 +131,13 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data);
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 5176464054..73fab083de 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2277,18 +2277,38 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
-static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
- const struct nic_flow_def *fd __rte_unused,
+static int setup_flow_flm_actions(struct flow_eth_dev *dev,
+ const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
- uint32_t local_idxs[] __rte_unused,
- uint32_t *local_idx_counter __rte_unused,
+ uint32_t local_idxs[],
+ uint32_t *local_idx_counter,
uint16_t *flm_rpl_ext_ptr __rte_unused,
uint32_t *flm_ft __rte_unused,
uint32_t *flm_scrub __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error)
{
+ /* Setup SLC LR */
+ struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
+
+ if (fd->header_strip_end_dyn != 0 || fd->header_strip_end_ofs != 0) {
+ struct hw_db_inline_slc_lr_data slc_lr_data = {
+ .head_slice_en = 1,
+ .head_slice_dyn = fd->header_strip_end_dyn,
+ .head_slice_ofs = fd->header_strip_end_ofs,
+ };
+ slc_lr_idx =
+ hw_db_inline_slc_lr_add(dev->ndev, dev->ndev->hw_db_handle, &slc_lr_data);
+ local_idxs[(*local_idx_counter)++] = slc_lr_idx.raw;
+
+ if (slc_lr_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference SLC LR resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -2450,6 +2470,9 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* SLC LR index 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+
 	/* Set up a filter matching all packets that violate the traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2498,6 +2521,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
+
hw_mod_tpe_reset(&ndev->be);
flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
--
2.45.0
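The new `hw_mod_slc_lr_rcp_set` accessor above follows the driver-wide convention of one static `_mod` helper that both reads and writes a field depending on a `get` flag, with thin public `set`/`get` wrappers and a `GET_SET` macro. A reduced, self-contained sketch of that convention (the macro, enum, and field names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the driver's GET_SET macro: read the field
 * into *value when getting, otherwise write *value into the field.
 * It relies on a 'get' variable being in scope at the expansion site. */
#define GET_SET(fld, val) do { \
		if (get) \
			*(val) = (fld); \
		else \
			(fld) = *(val); \
	} while (0)

enum demo_field_e { DEMO_HEAD_EN, DEMO_HEAD_OFS };

struct demo_rcp_s {
	uint32_t head_en;
	uint32_t head_ofs;
};

static struct demo_rcp_s rcp[8];

static int demo_rcp_mod(enum demo_field_e field, uint32_t index,
			uint32_t *value, int get)
{
	if (index >= 8)
		return -2; /* stands in for INDEX_TOO_LARGE */

	switch (field) {
	case DEMO_HEAD_EN:
		GET_SET(rcp[index].head_en, value);
		break;

	case DEMO_HEAD_OFS:
		GET_SET(rcp[index].head_ofs, value);
		break;

	default:
		return -5; /* stands in for UNSUP_FIELD */
	}

	return 0;
}

/* Public wrappers: 'set' passes the value by address with get == 0. */
static int demo_rcp_set(enum demo_field_e field, uint32_t index, uint32_t value)
{
	return demo_rcp_mod(field, index, &value, 0);
}

static int demo_rcp_get(enum demo_field_e field, uint32_t index, uint32_t *value)
{
	return demo_rcp_mod(field, index, value, 1);
}
```

Keeping validation and the version/field switch in one `_mod` function means every field gains bounds checking and get/set support from a single code path, at the cost of the slightly unusual by-address `set` wrapper.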
* [PATCH v3 28/73] net/ntnic: add PDB module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (26 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 27/73] net/ntnic: add SLC LR module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 29/73] net/ntnic: add QSL module Serhii Iliushyk
` (44 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Packet Description Builder module creates packet meta-data,
for example virtio-net headers.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 ++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 17 +++
3 files changed, 164 insertions(+)
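The recipe programmed further below writes a 12-byte descriptor (`HW_PDB_RCP_DESC_LEN` is set to 6, apparently in 2-byte units, giving the 12 bytes named in the patch's comment). That matches the size of the standard virtio-net header when the trailing `num_buffers` field (mergeable RX buffers) is present; a sketch of that layout for reference, taken from the virtio specification rather than from this driver:

```c
#include <assert.h>
#include <stdint.h>

/* Standard virtio-net header; with the trailing num_buffers field
 * (negotiated via VIRTIO_NET_F_MRG_RXBUF) the header is 12 bytes. */
struct virtio_net_hdr_mrg {
	uint8_t  flags;
	uint8_t  gso_type;
	uint16_t hdr_len;     /* length of the encapsulated headers */
	uint16_t gso_size;
	uint16_t csum_start;
	uint16_t csum_offset;
	uint16_t num_buffers; /* only present with mergeable RX buffers */
};

/* All members are naturally aligned, so there is no padding. */
_Static_assert(sizeof(struct virtio_net_hdr_mrg) == 12,
	       "virtio-net header with num_buffers is 12 bytes");
```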
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 2711f44083..7f1449d8ee 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -740,6 +740,9 @@ int hw_mod_pdb_alloc(struct flow_api_backend_s *be);
void hw_mod_pdb_free(struct flow_api_backend_s *be);
int hw_mod_pdb_reset(struct flow_api_backend_s *be);
int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value);
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be);
struct tpe_func_s {
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
index c3facacb08..59285405ba 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
@@ -85,6 +85,150 @@ int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->pdb_rcp_flush(be->be_dev, &be->pdb, start_idx, count);
}
+static int hw_mod_pdb_rcp_mod(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 9:
+ switch (field) {
+ case HW_PDB_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->pdb.v9.rcp[index], (uint8_t)*value,
+ sizeof(struct pdb_v9_rcp_s));
+ break;
+
+ case HW_PDB_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value,
+ be->pdb.nb_pdb_rcp_categories);
+ break;
+
+ case HW_PDB_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value);
+ break;
+
+ case HW_PDB_RCP_DESCRIPTOR:
+ GET_SET(be->pdb.v9.rcp[index].descriptor, value);
+ break;
+
+ case HW_PDB_RCP_DESC_LEN:
+ GET_SET(be->pdb.v9.rcp[index].desc_len, value);
+ break;
+
+ case HW_PDB_RCP_TX_PORT:
+ GET_SET(be->pdb.v9.rcp[index].tx_port, value);
+ break;
+
+ case HW_PDB_RCP_TX_IGNORE:
+ GET_SET(be->pdb.v9.rcp[index].tx_ignore, value);
+ break;
+
+ case HW_PDB_RCP_TX_NOW:
+ GET_SET(be->pdb.v9.rcp[index].tx_now, value);
+ break;
+
+ case HW_PDB_RCP_CRC_OVERWRITE:
+ GET_SET(be->pdb.v9.rcp[index].crc_overwrite, value);
+ break;
+
+ case HW_PDB_RCP_ALIGN:
+ GET_SET(be->pdb.v9.rcp[index].align, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs0_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs0_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs1_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs1_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs2_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs2_rel, value);
+ break;
+
+ case HW_PDB_RCP_IP_PROT_TNL:
+ GET_SET(be->pdb.v9.rcp[index].ip_prot_tnl, value);
+ break;
+
+ case HW_PDB_RCP_PPC_HSH:
+ GET_SET(be->pdb.v9.rcp[index].ppc_hsh, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_EN:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_en, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_BIT:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_bit, value);
+ break;
+
+ case HW_PDB_RCP_PCAP_KEEP_FCS:
+ GET_SET(be->pdb.v9.rcp[index].pcap_keep_fcs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 9 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_pdb_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be)
{
return be->iface->pdb_config_flush(be->be_dev, &be->pdb);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 73fab083de..1eab579142 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2473,6 +2473,19 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ /* PDB: set up the Direct Virtio Scatter-Gather descriptor of 12 bytes for recipe 0 */
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESCRIPTOR, 0, 7) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESC_LEN, 0, 6) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2530,6 +2543,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+ hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_PRESET_ALL, 0, 0);
+ hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
--
2.45.0
* [PATCH v3 29/73] net/ntnic: add QSL module
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Queue Selector module directs packets to a given destination,
including host queues, physical ports, exception paths, and discard.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/hw_mod_backend.h | 8 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 65 ++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 96 +++++++-
7 files changed, 595 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 7f031ccda8..edffd0a57a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -184,8 +184,11 @@ extern const char *dbg_res_descr[];
int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
uint32_t alignment);
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment);
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
#endif
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 7f1449d8ee..6fa2a3d94f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -666,8 +666,16 @@ int hw_mod_qsl_alloc(struct flow_api_backend_s *be);
void hw_mod_qsl_free(struct flow_api_backend_s *be);
int hw_mod_qsl_reset(struct flow_api_backend_s *be);
int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value);
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_qsl_unmq_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
uint32_t value);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 2aee2ee973..a51d621ef9 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -106,11 +106,52 @@ int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return -1;
}
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment)
+{
+ unsigned int idx_offs;
+
+ for (unsigned int res_idx = 0; res_idx < ndev->res[res_type].resource_count - (num - 1);
+ res_idx += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, res_idx)) {
+ for (idx_offs = 1; idx_offs < num; idx_offs++)
+ if (flow_nic_is_resource_used(ndev, res_type, res_idx + idx_offs))
+ break;
+
+ if (idx_offs < num)
+ continue;
+
+ /* found a contiguous number of "num" res_type elements - allocate them */
+ for (idx_offs = 0; idx_offs < num; idx_offs++) {
+ flow_nic_mark_resource_used(ndev, res_type, res_idx + idx_offs);
+ ndev->res[res_type].ref[res_idx + idx_offs] = 1;
+ }
+
+ return res_idx;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
}
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
+{
+ NT_LOG(DBG, FILTER, "Reference resource %s idx %i (before ref cnt %i)",
+ dbg_res_descr[res_type], index, ndev->res[res_type].ref[index]);
+ assert(flow_nic_is_resource_used(ndev, res_type, index));
+
+ if (ndev->res[res_type].ref[index] == (uint32_t)-1)
+ return -1;
+
+ ndev->res[res_type].ref[index]++;
+ return 0;
+}
+
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
{
NT_LOG(DBG, FILTER, "De-reference resource %s idx %i (before ref cnt %i)",
@@ -348,6 +389,18 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 0);
hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1);
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (int i = 0; i < eth_dev->num_queues; ++i) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value & ~(1U << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
#ifdef FLOW_DEBUG
ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
#endif
@@ -580,6 +633,18 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->rss_target_id = -1;
+ if (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (i = 0; i < eth_dev->num_queues; i++) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value | (1 << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
*rss_target_id = eth_dev->rss_target_id;
nic_insert_eth_port_dev(ndev, eth_dev);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
index 93b37d595e..70fe97a298 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
@@ -104,6 +104,114 @@ int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_rcp_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_rcp_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.rcp[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_rcp_s));
+ break;
+
+ case HW_QSL_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value,
+ be->qsl.nb_rcp_categories);
+ break;
+
+ case HW_QSL_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value);
+ break;
+
+ case HW_QSL_RCP_DISCARD:
+ GET_SET(be->qsl.v7.rcp[index].discard, value);
+ break;
+
+ case HW_QSL_RCP_DROP:
+ GET_SET(be->qsl.v7.rcp[index].drop, value);
+ break;
+
+ case HW_QSL_RCP_TBL_LO:
+ GET_SET(be->qsl.v7.rcp[index].tbl_lo, value);
+ break;
+
+ case HW_QSL_RCP_TBL_HI:
+ GET_SET(be->qsl.v7.rcp[index].tbl_hi, value);
+ break;
+
+ case HW_QSL_RCP_TBL_IDX:
+ GET_SET(be->qsl.v7.rcp[index].tbl_idx, value);
+ break;
+
+ case HW_QSL_RCP_TBL_MSK:
+ GET_SET(be->qsl.v7.rcp[index].tbl_msk, value);
+ break;
+
+ case HW_QSL_RCP_LR:
+ GET_SET(be->qsl.v7.rcp[index].lr, value);
+ break;
+
+ case HW_QSL_RCP_TSA:
+ GET_SET(be->qsl.v7.rcp[index].tsa, value);
+ break;
+
+ case HW_QSL_RCP_VLI:
+ GET_SET(be->qsl.v7.rcp[index].vli, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -117,6 +225,73 @@ int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qst_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qst_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_qst_entries) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.qst[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_qst_s));
+ break;
+
+ case HW_QSL_QST_QUEUE:
+ GET_SET(be->qsl.v7.qst[index].queue, value);
+ break;
+
+ case HW_QSL_QST_EN:
+ GET_SET(be->qsl.v7.qst[index].en, value);
+ break;
+
+ case HW_QSL_QST_TX_PORT:
+ GET_SET(be->qsl.v7.qst[index].tx_port, value);
+ break;
+
+ case HW_QSL_QST_LRE:
+ GET_SET(be->qsl.v7.qst[index].lre, value);
+ break;
+
+ case HW_QSL_QST_TCI:
+ GET_SET(be->qsl.v7.qst[index].tci, value);
+ break;
+
+ case HW_QSL_QST_VEN:
+ GET_SET(be->qsl.v7.qst[index].ven, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -130,6 +305,49 @@ int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qen_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qen_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= QSL_QEN_ENTRIES) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QEN_EN:
+ GET_SET(be->qsl.v7.qen[index].en, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, &value, 0);
+}
+
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, value, 1);
+}
+
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index b17bce3745..5572662647 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,12 +20,18 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_qsl {
+ struct hw_db_inline_qsl_data data;
+ int qst_idx;
+ } *qsl;
+
struct hw_db_inline_resource_db_slc_lr {
struct hw_db_inline_slc_lr_data data;
int ref;
} *slc_lr;
uint32_t nb_cot;
+ uint32_t nb_qsl;
uint32_t nb_slc_lr;
/* Items */
@@ -61,6 +67,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_qsl = ndev->be.qsl.nb_rcp_categories;
+ db->qsl = calloc(db->nb_qsl, sizeof(struct hw_db_inline_resource_db_qsl));
+
+ if (db->qsl == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_slc_lr = ndev->be.max_categories;
db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
@@ -86,6 +100,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->qsl);
free(db->slc_lr);
free(db->cat);
@@ -110,6 +125,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_QSL:
+ hw_db_inline_qsl_deref(ndev, db_handle, *(struct hw_db_qsl_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_SLC_LR:
hw_db_inline_slc_lr_deref(ndev, db_handle,
*(struct hw_db_slc_lr_idx *)&idxs[i]);
@@ -145,6 +164,13 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
+ /* QSL for traffic policing */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_hw_id, 0x3) < 0)
+ return -1;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, qsl_hw_id, 1) < 0)
+ return -1;
+
/* Select and enable QSL recipe */
if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
return -1;
@@ -255,6 +281,175 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* QSL */
+/******************************************************************************/
+
+/* Calculate the queue mask for QSL TBL_MSK for a given number of queues.
+ * NOTE: If the number of queues is not a power of two, the mask is created
+ * for the nearest smaller power of two.
+ */
+static uint32_t queue_mask(uint32_t nr_queues)
+{
+ nr_queues |= nr_queues >> 1;
+ nr_queues |= nr_queues >> 2;
+ nr_queues |= nr_queues >> 4;
+ nr_queues |= nr_queues >> 8;
+ nr_queues |= nr_queues >> 16;
+ return nr_queues >> 1;
+}
+
+static int hw_db_inline_qsl_compare(const struct hw_db_inline_qsl_data *data1,
+ const struct hw_db_inline_qsl_data *data2)
+{
+ if (data1->discard != data2->discard || data1->drop != data2->drop ||
+ data1->table_size != data2->table_size || data1->retransmit != data2->retransmit) {
+ return 0;
+ }
+
+ for (int i = 0; i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ if (data1->table[i].queue != data2->table[i].queue ||
+ data1->table[i].queue_en != data2->table[i].queue_en ||
+ data1->table[i].tx_port != data2->table[i].tx_port ||
+ data1->table[i].tx_port_en != data2->table[i].tx_port_en) {
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_qsl_idx qsl_idx = { .raw = 0 };
+ uint32_t qst_idx = 0;
+ int res;
+
+ qsl_idx.type = HW_DB_IDX_TYPE_QSL;
+
+ if (data->discard) {
+ qsl_idx.ids = 0;
+ return qsl_idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_qsl; ++i) {
+ if (hw_db_inline_qsl_compare(data, &db->qsl[i].data)) {
+ qsl_idx.ids = i;
+ hw_db_inline_qsl_ref(ndev, db, qsl_idx);
+ return qsl_idx;
+ }
+ }
+
+ res = flow_nic_alloc_resource(ndev, RES_QSL_RCP, 1);
+
+ if (res < 0) {
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qsl_idx.ids = res & 0xff;
+
+ if (data->table_size > 0) {
+ res = flow_nic_alloc_resource_config(ndev, RES_QSL_QST, data->table_size, 1);
+
+ if (res < 0) {
+ flow_nic_deref_resource(ndev, RES_QSL_RCP, qsl_idx.ids);
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qst_idx = (uint32_t)res;
+ }
+
+ memcpy(&db->qsl[qsl_idx.ids].data, data, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[qsl_idx.ids].qst_idx = qst_idx;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, qsl_idx.ids, 0x0);
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, qsl_idx.ids, data->discard);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_idx.ids, data->drop * 0x3);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_LR, qsl_idx.ids, data->retransmit * 0x3);
+
+ if (data->table_size == 0) {
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, 0x0);
+
+ } else {
+ const uint32_t table_start = qst_idx;
+ const uint32_t table_end = table_start + data->table_size - 1;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, table_end);
+
+ /* Toeplitz hash function uses TBL_IDX and TBL_MSK. */
+ uint32_t msk = queue_mask(table_end - table_start + 1);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, msk);
+
+ for (uint32_t i = 0; i < data->table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, table_start + i, 0x0);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_TX_PORT, table_start + i,
+ data->table[i].tx_port);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_LRE, table_start + i,
+ data->table[i].tx_port_en);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_QUEUE, table_start + i,
+ data->table[i].queue);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_EN, table_start + i,
+ data->table[i].queue_en);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, data->table_size);
+ }
+
+ hw_mod_qsl_rcp_flush(&ndev->be, qsl_idx.ids, 1);
+
+ return qsl_idx;
+}
+
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ (void)db_handle;
+
+ if (!idx.error && idx.ids != 0)
+ flow_nic_ref_resource(ndev, RES_QSL_RCP, idx.ids);
+}
+
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error || idx.ids == 0)
+ return;
+
+ if (flow_nic_deref_resource(ndev, RES_QSL_RCP, idx.ids) == 0) {
+ const int table_size = (int)db->qsl[idx.ids].data.table_size;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_qsl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ if (table_size > 0) {
+ const int table_start = db->qsl[idx.ids].qst_idx;
+
+ for (int i = 0; i < (int)table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL,
+ table_start + i, 0x0);
+ flow_nic_free_resource(ndev, RES_QSL_QST, table_start + i);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, table_size);
+ }
+
+ memset(&db->qsl[idx.ids].data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[idx.ids].qst_idx = 0;
+ }
+}
+
/******************************************************************************/
/* SLC_LR */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index ef63336b1c..d0435acaef 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,6 +36,10 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_qsl_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cat_idx {
HW_DB_IDX;
};
@@ -48,6 +52,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
};
@@ -113,6 +118,7 @@ struct hw_db_inline_action_set_data {
int jump;
struct {
struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
};
};
};
@@ -131,6 +137,11 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data);
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+
struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_slc_lr_data *data);
void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1eab579142..6d72f8d99b 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2277,9 +2277,55 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
+
+static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_data *qsl_data,
+ uint32_t num_dest_port, uint32_t num_queues)
+{
+ memset(qsl_data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+
+ if (fd->dst_num_avail <= 0) {
+ qsl_data->drop = 1;
+
+ } else {
+ assert(fd->dst_num_avail < HW_DB_INLINE_MAX_QST_PER_QSL);
+
+ uint32_t ports[fd->dst_num_avail];
+ uint32_t queues[fd->dst_num_avail];
+
+ uint32_t port_index = 0;
+ uint32_t queue_index = 0;
+ uint32_t max = num_dest_port > num_queues ? num_dest_port : num_queues;
+
+ memset(ports, 0, sizeof(ports));
+ memset(queues, 0, sizeof(queues));
+
+ qsl_data->table_size = max;
+ qsl_data->retransmit = num_dest_port > 0 ? 1 : 0;
+
+ for (int i = 0; i < fd->dst_num_avail; ++i)
+ if (fd->dst_id[i].type == PORT_PHY)
+ ports[port_index++] = fd->dst_id[i].id;
+
+ else if (fd->dst_id[i].type == PORT_VIRT)
+ queues[queue_index++] = fd->dst_id[i].id;
+
+ for (uint32_t i = 0; i < max; ++i) {
+ if (num_dest_port > 0) {
+ qsl_data->table[i].tx_port = ports[i % num_dest_port];
+ qsl_data->table[i].tx_port_en = 1;
+ }
+
+ if (num_queues > 0) {
+ qsl_data->table[i].queue = queues[i % num_queues];
+ qsl_data->table[i].queue_en = 1;
+ }
+ }
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
- const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
uint32_t local_idxs[],
@@ -2289,6 +2335,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
+ local_idxs[(*local_idx_counter)++] = qsl_idx.raw;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2329,6 +2386,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
fh->caller_id = caller_id;
struct hw_db_inline_qsl_data qsl_data;
+ setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
@@ -2399,6 +2457,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle,
+ &qsl_data);
+ fh->db_idxs[fh->db_idx_counter++] = qsl_idx.raw;
+ action_set_data.qsl = qsl_idx;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2470,6 +2541,24 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* Initialize QSL with unmatched recipe index 0 - discard */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, 0, 0x1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, 0);
+
+ /* Initialize QST with default index 0 */
+ if (hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, 0, 0x0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_qst_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
+
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
@@ -2488,6 +2577,7 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
NT_FLM_VIOLATING_MBR_FLOW_TYPE,
@@ -2534,6 +2624,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, 0, 0);
+ hw_mod_qsl_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_QSL_RCP, 0);
+
hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
--
2.45.0
* [PATCH v3 30/73] net/ntnic: add KM module
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Key Matcher module checks the values of individual packet fields.
It supports both exact matches, implemented with a CAM,
and wildcard matches, implemented with a TCAM.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 110 +-
drivers/net/ntnic/include/hw_mod_backend.h | 64 +-
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1065 +++++++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++
.../profile_inline/flow_api_hw_db_inline.h | 38 +
.../profile_inline/flow_api_profile_inline.c | 162 +++
7 files changed, 2024 insertions(+), 29 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b1d39b919b..a0f02f4e8a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -52,34 +52,32 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_WORD_NUM 24
+#define MAX_BANKS 6
+
+#define MAX_TCAM_START_OFFSETS 4
+
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
/*
- * Tunnel encapsulation header definition
+ * 128 128 32 32 32
+ * Have | QW0 || QW4 || SW8 || SW9 | SWX in FPGA
+ *
+ * Each word may start at any offset; the enabled words are combined
+ * in order to build the extracted match data, and the match key
+ * must be built the same way.
*/
-#define MAX_TUN_HDR_SIZE 128
-struct tunnel_header_s {
- union {
- uint8_t hdr8[MAX_TUN_HDR_SIZE];
- uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
- } d;
- uint32_t user_port_id;
- uint8_t len;
-
- uint8_t nb_vlans;
-
- uint8_t ip_version; /* 4: v4, 6: v6 */
- uint16_t ip_csum_precalc;
-
- uint8_t new_outer;
- uint8_t l2_len;
- uint8_t l3_len;
- uint8_t l4_len;
+enum extractor_e {
+ KM_USE_EXTRACTOR_UNDEF,
+ KM_USE_EXTRACTOR_QWORD,
+ KM_USE_EXTRACTOR_SWORD,
};
struct match_elem_s {
+ enum extractor_e extr;
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
uint32_t e_mask[4];
@@ -89,16 +87,76 @@ struct match_elem_s {
uint32_t word_len;
};
+enum cam_tech_use_e {
+ KM_CAM,
+ KM_TCAM,
+ KM_SYNERGY
+};
+
struct km_flow_def_s {
struct flow_api_backend_s *be;
+ /* For keeping track of identical entries */
+ struct km_flow_def_s *reference;
+ struct km_flow_def_s *root;
+
/* For collect flow elements and sorting */
struct match_elem_s match[MAX_MATCH_FIELDS];
+ struct match_elem_s *match_map[MAX_MATCH_FIELDS];
int num_ftype_elem;
+ /* Finally formatted CAM/TCAM entry */
+ enum cam_tech_use_e target;
+ uint32_t entry_word[MAX_WORD_NUM];
+ uint32_t entry_mask[MAX_WORD_NUM];
+ int key_word_size;
+
+ /* TCAM calculated possible bank start offsets */
+ int start_offsets[MAX_TCAM_START_OFFSETS];
+ int num_start_offsets;
+
/* Flow information */
/* HW input port ID needed for compare. In port must be identical on flow types */
uint32_t port_id;
+ uint32_t info; /* used for color (actions) */
+ int info_set;
+ int flow_type; /* 0 is illegal and used as unset */
+ int flushed_to_target; /* if this km entry has been finally programmed into NIC hw */
+
+ /* CAM specific bank management */
+ int cam_paired;
+ int record_indexes[MAX_BANKS];
+ int bank_used;
+ uint32_t *cuckoo_moves; /* for CAM statistics only */
+ struct cam_distrib_s *cam_dist;
+
+ /* TCAM specific bank management */
+ struct tcam_distrib_s *tcam_dist;
+ int tcam_start_bank;
+ int tcam_record;
+};
+
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
};
enum flow_port_type_e {
@@ -247,11 +305,25 @@ struct flow_handle {
};
};
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
void km_free_ndev_resource_management(void **handle);
int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
uint32_t word_len, enum frame_offs_e start, int8_t offset);
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id);
+/*
+ * Compares two KM key definitions after the initial collect, validate
+ * and optimize steps.
+ * km is compared against an existing km1.
+ * If identical, km1's flow_type is returned.
+ */
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1);
+
+int km_rcp_set(struct km_flow_def_s *km, int index);
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color);
+int km_clear_data_match_entry(struct km_flow_def_s *km);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6fa2a3d94f..26903f2183 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -132,6 +132,22 @@ static inline int is_non_zero(const void *addr, size_t n)
return 0;
}
+/* Sideband info bit indicator */
+#define SWX_INFO (1 << 6)
+
+enum km_flm_if_select_e {
+ KM_FLM_IF_FIRST = 0,
+ KM_FLM_IF_SECOND = 1
+};
+
+#define FIELD_START_INDEX 100
+
+#define COMMON_FUNC_INFO_S \
+ int ver; \
+ void *base; \
+ unsigned int alloced_size; \
+ int debug
+
enum frame_offs_e {
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
@@ -141,22 +157,39 @@ enum frame_offs_e {
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ SB_VNI = SWX_INFO | 1,
+ SB_MAC_PORT = SWX_INFO | 2,
+ SB_KCC_ID = SWX_INFO | 3
};
-/* Sideband info bit indicator */
+enum {
+ QW0_SEL_EXCLUDE = 0,
+ QW0_SEL_FIRST32 = 1,
+ QW0_SEL_FIRST64 = 3,
+ QW0_SEL_ALL128 = 4,
+};
-enum km_flm_if_select_e {
- KM_FLM_IF_FIRST = 0,
- KM_FLM_IF_SECOND = 1
+enum {
+ QW4_SEL_EXCLUDE = 0,
+ QW4_SEL_FIRST32 = 1,
+ QW4_SEL_FIRST64 = 2,
+ QW4_SEL_ALL128 = 3,
};
-#define FIELD_START_INDEX 100
+enum {
+ DW8_SEL_EXCLUDE = 0,
+ DW8_SEL_FIRST32 = 3,
+};
-#define COMMON_FUNC_INFO_S \
- int ver; \
- void *base; \
- unsigned int alloced_size; \
- int debug
+enum {
+ DW10_SEL_EXCLUDE = 0,
+ DW10_SEL_FIRST32 = 2,
+};
+
+enum {
+ SWX_SEL_EXCLUDE = 0,
+ SWX_SEL_ALL32 = 1,
+};
enum {
PROT_OTHER = 0,
@@ -440,13 +473,24 @@ int hw_mod_km_alloc(struct flow_api_backend_s *be);
void hw_mod_km_free(struct flow_api_backend_s *be);
int hw_mod_km_reset(struct flow_api_backend_s *be);
int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value);
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value);
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count);
int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
int byte_val, uint32_t *value_set);
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set);
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 237e9f7b4e..30d6ea728e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -10,8 +10,34 @@
#include "flow_api_engine.h"
#include "nt_util.h"
+#define MAX_QWORDS 2
+#define MAX_SWORDS 2
+
+#define CUCKOO_MOVE_MAX_DEPTH 8
+
#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+#define CAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_cam_records + (rec))
+#define CAM_KM_DIST_IDX(bnk) \
+ ({ \
+ int _temp_bnk = (bnk); \
+ CAM_DIST_IDX(_temp_bnk, km->record_indexes[_temp_bnk]); \
+ })
+
+#define TCAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_tcam_bank_width + (rec))
+
+#define CAM_ENTRIES \
+ (km->be->km.nb_cam_banks * km->be->km.nb_cam_records * sizeof(struct cam_distrib_s))
+#define TCAM_ENTRIES \
+ (km->be->km.nb_tcam_bank_width * km->be->km.nb_tcam_banks * sizeof(struct tcam_distrib_s))
+
+/*
+ * CAM structures and defines
+ */
+struct cam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
static const struct cam_match_masks_s {
uint32_t word_len;
uint32_t key_mask[4];
@@ -36,6 +62,25 @@ static const struct cam_match_masks_s {
{ 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
};
+static int cam_addr_reserved_stack[CUCKOO_MOVE_MAX_DEPTH];
+
+/*
+ * TCAM structures and defines
+ */
+struct tcam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
+static int tcam_find_mapping(struct km_flow_def_s *km);
+
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
+{
+ km->cam_dist = (struct cam_distrib_s *)*handle;
+ km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
+ km->tcam_dist =
+ (struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+}
+
void km_free_ndev_resource_management(void **handle)
{
if (*handle) {
@@ -98,3 +143,1023 @@ int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_m
km->num_ftype_elem++;
return 0;
}
+
+static int get_word(struct km_flow_def_s *km, uint32_t size, int marked[])
+{
+ for (int i = 0; i < km->num_ftype_elem; i++)
+ if (!marked[i] && !(km->match[i].extr_start_offs_id & SWX_INFO) &&
+ km->match[i].word_len == size)
+ return i;
+
+ return -1;
+}
+
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id)
+{
+ /*
+ * Create combined extractor mappings:
+ * key fields may be changed to cover what would otherwise be
+ * un-mappable; split into CAM and TCAM and use synergy mode
+ * when available
+ */
+ int match_marked[MAX_MATCH_FIELDS];
+ int idx = 0;
+ int next = 0;
+ int m_idx;
+ int size;
+
+ memset(match_marked, 0, sizeof(match_marked));
+
+ /* build QWords */
+ for (int qwords = 0; qwords < MAX_QWORDS; qwords++) {
+ size = 4;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 2;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 1;
+ m_idx = get_word(km, 1, match_marked);
+ }
+ }
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_QWORD;
+
+ /* build final entry words and mask array */
+ for (int i = 0; i < size; i++) {
+ km->entry_word[idx + i] = km->match[m_idx].e_word[i];
+ km->entry_mask[idx + i] = km->match[m_idx].e_mask[i];
+ }
+
+ idx += size;
+ next++;
+ }
+
+ m_idx = get_word(km, 4, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more QWords */
+ return -1;
+ }
+
+ /*
+ * On km v6+ we have DWORDs here instead. However, we only use them as SWORDs for now.
+ * No match would be able to exploit these as DWORDs because of the maximum length of
+ * 12 words in the CAM. The last 2 words are taken by KCC-ID/SWX and Color. You could
+ * have one or no QWORDs, in which case both DWORDs would fit within 10 words, but we
+ * don't have such a use case built in yet
+ */
+ /* build SWords */
+ for (int swords = 0; swords < MAX_SWORDS; swords++) {
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_SWORD;
+
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[m_idx].e_word[0];
+ km->entry_mask[idx] = km->match[m_idx].e_mask[0];
+ idx++;
+ next++;
+ }
+
+ /*
+ * Make sure we took them all
+ */
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more SWords */
+ return -1;
+ }
+
+ /*
+ * Handle SWX words specially
+ */
+ int swx_found = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id & SWX_INFO) {
+ km->match_map[next] = &km->match[i];
+ km->match[i].extr = KM_USE_EXTRACTOR_SWORD;
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[i].e_word[0];
+ km->entry_mask[idx] = km->match[i].e_mask[0];
+ idx++;
+ next++;
+ swx_found = 1;
+ }
+ }
+
+ assert(next == km->num_ftype_elem);
+
+ km->key_word_size = idx;
+ km->port_id = port_id;
+
+ km->target = KM_CAM;
+
+ /*
+ * Finally decide if we want to put this match->action into the TCAM
+ * When an SWX word is used, the entry must always go into the CAM, no matter the mask pattern.
+ * Later, when synergy mode is applied, we can do a split
+ */
+ if (!swx_found && km->key_word_size <= 6) {
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match_map[i]->masked_for_tcam) {
+ /* At least one */
+ km->target = KM_TCAM;
+ }
+ }
+ }
+
+ NT_LOG(DBG, FILTER, "This flow goes into %s", (km->target == KM_TCAM) ? "TCAM" : "CAM");
+
+ if (km->target == KM_TCAM) {
+ if (km->key_word_size > 10) {
+ /* do not support SWX in TCAM */
+ return -1;
+ }
+
+ /*
+ * adjust for unsupported key word size in TCAM
+ */
+ if ((km->key_word_size == 5 || km->key_word_size == 7 || km->key_word_size == 9)) {
+ km->entry_mask[km->key_word_size] = 0;
+ km->key_word_size++;
+ }
+
+ /*
+ * Calculate possible bank start indexes, given that
+ * the length of a key cannot change among the banks it uses.
+ * Unfortunately, restrictions in the TCAM lookup
+ * make it hard to handle key lengths larger than 6,
+ * although other sizes should be possible too
+ */
+ switch (km->key_word_size) {
+ case 1:
+ for (int i = 0; i < 4; i++)
+ km->start_offsets[i] = 8 + i;
+
+ km->num_start_offsets = 4;
+ break;
+
+ case 2:
+ km->start_offsets[0] = 6;
+ km->num_start_offsets = 1;
+ break;
+
+ case 3:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 4:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 6:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Final Key word size too large: %i",
+ km->key_word_size);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1)
+{
+ if (km->target != km1->target || km->num_ftype_elem != km1->num_ftype_elem ||
+ km->key_word_size != km1->key_word_size || km->info_set != km1->info_set)
+ return 0;
+
+ /*
+ * before KCC-CAM:
+ * if port is added to match, then we can have different ports in CAT
+ * that reuses this flow type
+ */
+ int port_match_included = 0, kcc_swx_used = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id == SB_MAC_PORT) {
+ port_match_included = 1;
+ break;
+ }
+
+ if (km->match_map[i]->extr_start_offs_id == SB_KCC_ID) {
+ kcc_swx_used = 1;
+ break;
+ }
+ }
+
+ /*
+ * If not using KCC and if port match is not included in CAM,
+ * we need to have same port_id to reuse
+ */
+ if (!kcc_swx_used && !port_match_included && km->port_id != km1->port_id)
+ return 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ /* using same extractor types in same sequence */
+ if (km->match_map[i]->extr_start_offs_id !=
+ km1->match_map[i]->extr_start_offs_id ||
+ km->match_map[i]->rel_offs != km1->match_map[i]->rel_offs ||
+ km->match_map[i]->extr != km1->match_map[i]->extr ||
+ km->match_map[i]->word_len != km1->match_map[i]->word_len) {
+ return 0;
+ }
+ }
+
+ if (km->target == KM_CAM) {
+ /* in CAM must exactly match on all masks */
+ for (int i = 0; i < km->key_word_size; i++)
+ if (km->entry_mask[i] != km1->entry_mask[i])
+ return 0;
+
+ /* Would be set later if not reusing from km1 */
+ km->cam_paired = km1->cam_paired;
+
+ } else if (km->target == KM_TCAM) {
+ /*
+ * If TCAM, we must make sure Recipe Key Mask does not
+ * mask out enable bits in masks
+ * Note: it is important that km1 is the original creator
+ * of the KM Recipe, since it contains its true masks
+ */
+ for (int i = 0; i < km->key_word_size; i++)
+ if ((km->entry_mask[i] & km1->entry_mask[i]) != km->entry_mask[i])
+ return 0;
+
+ km->tcam_start_bank = km1->tcam_start_bank;
+ km->tcam_record = -1; /* needs to be found later */
+
+ } else {
+ NT_LOG(DBG, FILTER, "ERROR - KM target not defined or supported");
+ return 0;
+ }
+
+ /*
+ * Check for a flow clash. If already programmed, return -1
+ */
+ int double_match = 1;
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ if ((km->entry_word[i] & km->entry_mask[i]) !=
+ (km1->entry_word[i] & km1->entry_mask[i])) {
+ double_match = 0;
+ break;
+ }
+ }
+
+ if (double_match)
+ return -1;
+
+ /*
+ * Note that TCAM and CAM may reuse the same RCP and flow type;
+ * when this happens, the CAM entry wins on overlap
+ */
+
+ /* Use same KM Recipe and same flow type - return flow type */
+ return km1->flow_type;
+}
+
+int km_rcp_set(struct km_flow_def_s *km, int index)
+{
+ int qw = 0;
+ int sw = 0;
+ int swx = 0;
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PRESET_ALL, index, 0, 0);
+
+ /* set extractor words, offs, contrib */
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ switch (km->match_map[i]->extr) {
+ case KM_USE_EXTRACTOR_SWORD:
+ if (km->match_map[i]->extr_start_offs_id & SWX_INFO) {
+ if (km->target == KM_CAM && swx == 0) {
+ /* SWX */
+ if (km->match_map[i]->extr_start_offs_id == SB_VNI) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - VNI");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_MAC_PORT) {
+ NT_LOG(DBG, FILTER,
+ "Set KM SWX sel A - PTC + MAC");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_KCC_ID) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - KCC ID");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ return -1;
+ }
+
+ swx++;
+
+ } else {
+ if (sw == 0) {
+ /* DW8 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_SEL_A, index, 0,
+ DW8_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW8 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else if (sw == 1) {
+ /* DW10 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_SEL_A, index, 0,
+ DW10_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW10 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else {
+ return -1;
+ }
+
+ sw++;
+ }
+
+ break;
+
+ case KM_USE_EXTRACTOR_QWORD:
+ if (qw == 0) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW0 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else if (qw == 1) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW4 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else {
+ return -1;
+ }
+
+ qw++;
+ break;
+
+ default:
+ return -1;
+ }
+ }
+
+ /* set mask A */
+ for (int i = 0; i < km->key_word_size; i++) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_MASK_A, index,
+ (km->be->km.nb_km_rcp_mask_a_word_size - 1) - i,
+ km->entry_mask[i]);
+ NT_LOG(DBG, FILTER, "Set KM mask A: %08x", km->entry_mask[i]);
+ }
+
+ if (km->target == KM_CAM) {
+ /* set info - Color */
+ if (km->info_set) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_INFO_A, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM info A");
+ }
+
+ /* set key length A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_EL_A, index, 0,
+ km->key_word_size + !!km->info_set - 1); /* select id is -1 */
+ /* set Flow Type for Key A */
+ NT_LOG(DBG, FILTER, "Set KM EL A: %i", km->key_word_size + !!km->info_set - 1);
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_FTM_A, index, 0, 1 << km->flow_type);
+
+ NT_LOG(DBG, FILTER, "Set KM FTM A - ft: %i", km->flow_type);
+
+ /* Set Paired - only on the CAM part though... TODO split CAM and TCAM */
+ if ((uint32_t)(km->key_word_size + !!km->info_set) >
+ km->be->km.nb_cam_record_words) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PAIRED, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM CAM Paired");
+ km->cam_paired = 1;
+ }
+
+ } else if (km->target == KM_TCAM) {
+ uint32_t bank_bm = 0;
+
+ if (tcam_find_mapping(km) < 0) {
+ /* failed mapping into TCAM */
+ NT_LOG(DBG, FILTER, "INFO: TCAM mapping flow failed");
+ return -1;
+ }
+
+ assert((uint32_t)(km->tcam_start_bank + km->key_word_size) <=
+ km->be->km.nb_tcam_banks);
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ bank_bm |=
+ (1 << (km->be->km.nb_tcam_banks - 1 - (km->tcam_start_bank + i)));
+ }
+
+ /* Set BANK_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_BANK_A, index, 0, bank_bm);
+ /* Set Kl_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_KL_A, index, 0, km->key_word_size - 1);
+
+ } else {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int cam_populate(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ km->entry_word[i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = km;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1,
+ km->entry_word[km->be->km.nb_cam_record_words + i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = km;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+
+ return res;
+}
+
+static int cam_reset_entry(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = NULL;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = NULL;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+ return res;
+}
+
+static int move_cuckoo_index(struct km_flow_def_s *km)
+{
+ assert(km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner);
+
+ for (uint32_t bank = 0; bank < km->be->km.nb_cam_banks; bank++) {
+ /* It will not select itself */
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner == NULL) {
+ if (km->cam_paired) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner != NULL)
+ continue;
+ }
+
+ /*
+ * Populate in new position
+ */
+ int res = cam_populate(km, bank);
+
+ if (res) {
+ NT_LOG(DBG, FILTER,
+ "Error: failed to write to KM CAM in cuckoo move");
+ return 0;
+ }
+
+ /*
+ * Reset/free entry in old bank
+ * HW flushes are really not needed; the old addresses are always taken
+ * over by the caller. If you change this code in future updates, this may
+ * no longer be true!
+ */
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = NULL;
+
+ if (km->cam_paired)
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "KM Cuckoo hash moved from bank %i to bank %i (%04X => %04X)",
+ km->bank_used, bank, CAM_KM_DIST_IDX(km->bank_used),
+ CAM_KM_DIST_IDX(bank));
+ km->bank_used = bank;
+ (*km->cuckoo_moves)++;
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx, int levels,
+ int cam_adr_list_len)
+{
+ struct km_flow_def_s *km = km_parent->cam_dist[bank_idx].km_owner;
+
+ assert(levels <= CUCKOO_MOVE_MAX_DEPTH);
+
+ /*
+ * Only move if same pairness
+ * Can be extended later to handle both move of paired and single entries
+ */
+ if (!km || km_parent->cam_paired != km->cam_paired)
+ return 0;
+
+ if (move_cuckoo_index(km))
+ return 1;
+
+ if (levels <= 1)
+ return 0;
+
+ assert(cam_adr_list_len < CUCKOO_MOVE_MAX_DEPTH);
+
+ cam_addr_reserved_stack[cam_adr_list_len++] = bank_idx;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ int reserved = 0;
+ int new_idx = CAM_KM_DIST_IDX(i);
+
+ for (int i_reserved = 0; i_reserved < cam_adr_list_len; i_reserved++) {
+ if (cam_addr_reserved_stack[i_reserved] == new_idx) {
+ reserved = 1;
+ break;
+ }
+ }
+
+ if (reserved)
+ continue;
+
+ int res = move_cuckoo_index_level(km, new_idx, levels - 1, cam_adr_list_len);
+
+ if (res) {
+ if (move_cuckoo_index(km))
+ return 1;
+
+ assert(0);
+ }
+ }
+
+ return 0;
+}
+
+static int km_write_data_to_cam(struct km_flow_def_s *km)
+{
+ int res = 0;
+ assert(km->be->km.nb_cam_banks <= MAX_BANKS);
+ assert(km->cam_dist);
+
+ NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
+ km->record_indexes[1], km->record_indexes[2]);
+
+ if (km->info_set)
+ km->entry_word[km->key_word_size] = km->info; /* finally set info */
+
+ int bank = -1;
+
+ /*
+ * first step, see if any of the banks are free
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(i_bank)].km_owner == NULL) {
+ if (km->cam_paired == 0 ||
+ km->cam_dist[CAM_KM_DIST_IDX(i_bank) + 1].km_owner == NULL) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0) {
+ /*
+ * Second step - cuckoo move existing flows if possible
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (move_cuckoo_index_level(km, CAM_KM_DIST_IDX(i_bank), 4, 0)) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0)
+ return -1;
+
+ /* populate CAM */
+ NT_LOG(DBG, FILTER, "KM Bank = %i (addr %04X)", bank, CAM_KM_DIST_IDX(bank));
+ res = cam_populate(km, bank);
+
+ if (res == 0) {
+ km->flushed_to_target = 1;
+ km->bank_used = bank;
+ }
+
+ return res;
+}
+
+/*
+ * TCAM
+ */
+static int tcam_find_free_record(struct km_flow_def_s *km, int start_bank)
+{
+ for (uint32_t rec = 0; rec < km->be->km.nb_tcam_bank_width; rec++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank, rec)].km_owner == NULL) {
+ int pass = 1;
+
+ for (int ii = 1; ii < km->key_word_size; ii++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank + ii, rec)].km_owner !=
+ NULL) {
+ pass = 0;
+ break;
+ }
+ }
+
+ if (pass) {
+ km->tcam_record = rec;
+ return 1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int tcam_find_mapping(struct km_flow_def_s *km)
+{
+ /* Search record and start index for this flow */
+ for (int bs_idx = 0; bs_idx < km->num_start_offsets; bs_idx++) {
+ if (tcam_find_free_record(km, km->start_offsets[bs_idx])) {
+ km->tcam_start_bank = km->start_offsets[bs_idx];
+ NT_LOG(DBG, FILTER, "Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ return 0;
+ }
+ }
+
+ return -1;
+}
+
+static int tcam_write_word(struct km_flow_def_s *km, int bank, int record, uint32_t word,
+ uint32_t mask)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ uint8_t a = (uint8_t)((word >> (24 - (byte * 8))) & 0xff);
+ uint8_t a_m = (uint8_t)((mask >> (24 - (byte * 8))) & 0xff);
+ /* calculate important value bits */
+ a = a & a_m;
+
+ for (int val = 0; val < 256; val++) {
+ err |= hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if ((val & a_m) == a)
+ all_recs[rec_val] |= rec_bit;
+ else
+ all_recs[rec_val] &= ~rec_bit;
+
+ err |= hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ /* flush bank */
+ err |= hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+
+ if (err == 0) {
+ assert(km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner == NULL);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = km;
+ }
+
+ return err;
+}
+
+static int km_write_data_to_tcam(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_record < 0) {
+ tcam_find_free_record(km, km->tcam_start_bank);
+
+ if (km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER, "Reused RCP: Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ }
+
+ /* Write KM_TCI */
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record,
+ km->info);
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record,
+ km->flow_type);
+ err |= hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++) {
+ err = tcam_write_word(km, km->tcam_start_bank + i, km->tcam_record,
+ km->entry_word[i], km->entry_mask[i]);
+ }
+
+ if (err == 0)
+ km->flushed_to_target = 1;
+
+ return err;
+}
+
+static int tcam_reset_bank(struct km_flow_def_s *km, int bank, int record)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ for (int val = 0; val < 256; val++) {
+ err = hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+
+ all_recs[rec_val] &= ~rec_bit;
+ err = hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ if (err)
+ return err;
+
+ /* flush bank */
+ err = hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER, "Reset TCAM bank %i, rec_val %i rec bit %08x", bank, rec_val,
+ rec_bit);
+
+ return err;
+}
+
+static int tcam_reset_entry(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_start_bank < 0 || km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ /* Write KM_TCI */
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++)
+ err = tcam_reset_bank(km, km->tcam_start_bank + i, km->tcam_record);
+
+ return err;
+}
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color)
+{
+ int res = -1;
+
+ km->info = color;
+ NT_LOG(DBG, FILTER, "Write Data entry Color: %08x", color);
+
+ switch (km->target) {
+ case KM_CAM:
+ res = km_write_data_to_cam(km);
+ break;
+
+ case KM_TCAM:
+ res = km_write_data_to_tcam(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ break;
+ }
+
+ return res;
+}
+
+int km_clear_data_match_entry(struct km_flow_def_s *km)
+{
+ int res = 0;
+
+ if (km->root) {
+ struct km_flow_def_s *km1 = km->root;
+
+ while (km1->reference != km)
+ km1 = km1->reference;
+
+ km1->reference = km->reference;
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->reference) {
+ km->reference->root = NULL;
+
+ switch (km->target) {
+ case KM_CAM:
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = km->reference;
+
+ if (km->key_word_size + !!km->info_set > 1) {
+ assert(km->cam_paired);
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner =
+ km->reference;
+ }
+
+ break;
+
+ case KM_TCAM:
+ for (int i = 0; i < km->key_word_size; i++) {
+ km->tcam_dist[TCAM_DIST_IDX(km->tcam_start_bank + i,
+ km->tcam_record)]
+ .km_owner = km->reference;
+ }
+
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->flushed_to_target) {
+ switch (km->target) {
+ case KM_CAM:
+ res = cam_reset_entry(km, km->bank_used);
+ break;
+
+ case KM_TCAM:
+ res = tcam_reset_entry(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+ }
+
+ return res;
+}
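The ownership handling in km_clear_data_match_entry() above relies on an intrusive chain: flows that share one hardware entry hang off a root owner through `reference` pointers, and a non-root sharer is removed by walking from the root and bypassing it. A minimal sketch of that unlink step, with hypothetical names standing in for `km_flow_def_s`:

```c
#include <assert.h>

/* Hypothetical stand-in for km_flow_def_s: flows sharing one HW entry are
 * chained from a "root" owner through "reference" pointers. */
struct km_node {
	struct km_node *root;      /* NULL for the owning (root) node */
	struct km_node *reference; /* next sharer in the chain, or NULL */
};

/* Unlink a non-root sharer, mirroring the first branch of
 * km_clear_data_match_entry(): walk from the root until the node whose
 * "reference" points at us, then make it bypass us. */
static void km_unlink_sharer(struct km_node *km)
{
	struct km_node *km1 = km->root;

	while (km1->reference != km)
		km1 = km1->reference;

	km1->reference = km->reference;
}
```

The walk assumes `km` is reachable from `km->root`, which the driver guarantees by construction when a flow is attached to an existing entry.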
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
index 532884ca01..b8a30671c3 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
@@ -165,6 +165,240 @@ int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
return be->iface->km_rcp_flush(be->be_dev, &be->km, start_idx, count);
}
+static int hw_mod_km_rcp_mod(struct flow_api_backend_s *be, enum hw_km_e field, int index,
+ int word_off, uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->km.nb_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.rcp[index], (uint8_t)*value, sizeof(struct km_v7_rcp_s));
+ break;
+
+ case HW_KM_RCP_QW0_DYN:
+ GET_SET(be->km.v7.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_b, value);
+ break;
+
+ case HW_KM_RCP_QW4_DYN:
+ GET_SET(be->km.v7.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW8_DYN:
+ GET_SET(be->km.v7.rcp[index].dw8_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW8_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw8_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW10_DYN:
+ GET_SET(be->km.v7.rcp[index].dw10_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW10_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw10_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_b, value);
+ break;
+
+ case HW_KM_RCP_SWX_CCH:
+ GET_SET(be->km.v7.rcp[index].swx_cch, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_A:
+ GET_SET(be->km.v7.rcp[index].swx_sel_a, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_B:
+ GET_SET(be->km.v7.rcp[index].swx_sel_b, value);
+ break;
+
+ case HW_KM_RCP_MASK_A:
+ if (word_off > KM_RCP_MASK_D_A_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_d_a[word_off], value);
+ break;
+
+ case HW_KM_RCP_MASK_B:
+ if (word_off > KM_RCP_MASK_B_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_b[word_off], value);
+ break;
+
+ case HW_KM_RCP_DUAL:
+ GET_SET(be->km.v7.rcp[index].dual, value);
+ break;
+
+ case HW_KM_RCP_PAIRED:
+ GET_SET(be->km.v7.rcp[index].paired, value);
+ break;
+
+ case HW_KM_RCP_EL_A:
+ GET_SET(be->km.v7.rcp[index].el_a, value);
+ break;
+
+ case HW_KM_RCP_EL_B:
+ GET_SET(be->km.v7.rcp[index].el_b, value);
+ break;
+
+ case HW_KM_RCP_INFO_A:
+ GET_SET(be->km.v7.rcp[index].info_a, value);
+ break;
+
+ case HW_KM_RCP_INFO_B:
+ GET_SET(be->km.v7.rcp[index].info_b, value);
+ break;
+
+ case HW_KM_RCP_FTM_A:
+ GET_SET(be->km.v7.rcp[index].ftm_a, value);
+ break;
+
+ case HW_KM_RCP_FTM_B:
+ GET_SET(be->km.v7.rcp[index].ftm_b, value);
+ break;
+
+ case HW_KM_RCP_BANK_A:
+ GET_SET(be->km.v7.rcp[index].bank_a, value);
+ break;
+
+ case HW_KM_RCP_BANK_B:
+ GET_SET(be->km.v7.rcp[index].bank_b, value);
+ break;
+
+ case HW_KM_RCP_KL_A:
+ GET_SET(be->km.v7.rcp[index].kl_a, value);
+ break;
+
+ case HW_KM_RCP_KL_B:
+ GET_SET(be->km.v7.rcp[index].kl_b, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_A:
+ GET_SET(be->km.v7.rcp[index].keyway_a, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_B:
+ GET_SET(be->km.v7.rcp[index].keyway_b, value);
+ break;
+
+ case HW_KM_RCP_SYNERGY_MODE:
+ GET_SET(be->km.v7.rcp[index].synergy_mode, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw0_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw0_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw2_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw2_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw4_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw4_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw5_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw5_b_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, &value, 0);
+}
+
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, value, 1);
+}
+
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -180,6 +414,103 @@ int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_cam_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_cam_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ if ((unsigned int)bank >= be->km.nb_cam_banks) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ if ((unsigned int)record >= be->km.nb_cam_records) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ unsigned int index = bank * be->km.nb_cam_records + record;
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_CAM_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.cam[index], (uint8_t)*value, sizeof(struct km_v7_cam_s));
+ break;
+
+ case HW_KM_CAM_W0:
+ GET_SET(be->km.v7.cam[index].w0, value);
+ break;
+
+ case HW_KM_CAM_W1:
+ GET_SET(be->km.v7.cam[index].w1, value);
+ break;
+
+ case HW_KM_CAM_W2:
+ GET_SET(be->km.v7.cam[index].w2, value);
+ break;
+
+ case HW_KM_CAM_W3:
+ GET_SET(be->km.v7.cam[index].w3, value);
+ break;
+
+ case HW_KM_CAM_W4:
+ GET_SET(be->km.v7.cam[index].w4, value);
+ break;
+
+ case HW_KM_CAM_W5:
+ GET_SET(be->km.v7.cam[index].w5, value);
+ break;
+
+ case HW_KM_CAM_FT0:
+ GET_SET(be->km.v7.cam[index].ft0, value);
+ break;
+
+ case HW_KM_CAM_FT1:
+ GET_SET(be->km.v7.cam[index].ft1, value);
+ break;
+
+ case HW_KM_CAM_FT2:
+ GET_SET(be->km.v7.cam[index].ft2, value);
+ break;
+
+ case HW_KM_CAM_FT3:
+ GET_SET(be->km.v7.cam[index].ft3, value);
+ break;
+
+ case HW_KM_CAM_FT4:
+ GET_SET(be->km.v7.cam[index].ft4, value);
+ break;
+
+ case HW_KM_CAM_FT5:
+ GET_SET(be->km.v7.cam[index].ft5, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_cam_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count)
{
if (count == ALL_ENTRIES)
@@ -273,6 +604,12 @@ int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int ba
return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 0);
}
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set)
+{
+ return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 1);
+}
+
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -288,6 +625,49 @@ int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_tci_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_tci_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ unsigned int index = bank * be->km.nb_tcam_bank_width + record;
+
+ if (index >= (be->km.nb_tcam_banks * be->km.nb_tcam_bank_width)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_TCI_COLOR:
+ GET_SET(be->km.v7.tci[index].color, value);
+ break;
+
+ case HW_KM_TCI_FT:
+ GET_SET(be->km.v7.tci[index].ft, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_tci_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
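The hw_mod_km_*_mod() helpers above fold each set/get pair into a single switch over the field enum, selected by a `get` flag, so every shadow-register field is listed only once. A minimal sketch of that idiom (the macro here takes the flag as an argument; the driver's actual GET_SET macro picks it up from scope):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical variant of the GET_SET idiom: one accessor body serves both
 * the set and get entry points. */
#define GET_SET(cached, val, get) \
	do { \
		if (get) \
			*(val) = (cached); \
		else \
			(cached) = *(val); \
	} while (0)

/* Illustrative shadow copy of two recipe fields. */
struct rcp_shadow {
	uint32_t qw0_dyn;
	uint32_t qw0_sel_a;
};

static int rcp_mod(struct rcp_shadow *rcp, int field, uint32_t *value, int get)
{
	switch (field) {
	case 0:
		GET_SET(rcp->qw0_dyn, value, get);
		break;

	case 1:
		GET_SET(rcp->qw0_sel_a, value, get);
		break;

	default:
		return -1; /* unsupported field */
	}

	return 0;
}

static int rcp_set(struct rcp_shadow *rcp, int field, uint32_t value)
{
	return rcp_mod(rcp, field, &value, 0);
}

static int rcp_get(struct rcp_shadow *rcp, int field, uint32_t *value)
{
	return rcp_mod(rcp, field, value, 1);
}
```

The thin `rcp_set()`/`rcp_get()` wrappers match how hw_mod_km_rcp_set() and hw_mod_km_rcp_get() delegate to hw_mod_km_rcp_mod().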
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5572662647..4737460cdf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -40,7 +40,19 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_km_rcp {
+ struct hw_db_inline_km_rcp_data data;
+ int ref;
+
+ struct hw_db_inline_resource_db_km_ft {
+ struct hw_db_inline_km_ft_data data;
+ int ref;
+ } *ft;
+ } *km;
+
uint32_t nb_cat;
+ uint32_t nb_km_ft;
+ uint32_t nb_km_rcp;
/* Hardware */
@@ -91,6 +103,25 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_km_ft = ndev->be.cat.nb_flow_types;
+ db->nb_km_rcp = ndev->be.km.nb_categories;
+ db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
+
+ if (db->km == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ db->km[i].ft = calloc(db->nb_km_ft * db->nb_cat,
+ sizeof(struct hw_db_inline_resource_db_km_ft));
+
+ if (db->km[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
*db_handle = db;
return 0;
}
@@ -104,6 +135,13 @@ void hw_db_inline_destroy(void *db_handle)
free(db->slc_lr);
free(db->cat);
+ if (db->km) {
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
+ free(db->km[i].ft);
+
+ free(db->km);
+ }
+
free(db->cfn);
free(db);
@@ -134,12 +172,61 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_KM_RCP:
+ hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
+ break;
+
default:
break;
}
}
}
+
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ for (uint32_t i = 0; i < size; ++i) {
+ if (idxs[i].type != type)
+ continue;
+
+ switch (type) {
+ case HW_DB_IDX_TYPE_NONE:
+ return NULL;
+
+ case HW_DB_IDX_TYPE_CAT:
+ return &db->cat[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_QSL:
+ return &db->qsl[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_COT:
+ return &db->cot[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_SLC_LR:
+ return &db->slc_lr[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_KM_RCP:
+ return &db->km[idxs[i].id1].data;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
+ default:
+ return NULL;
+ }
+ }
+
+ return NULL;
+}
+
/******************************************************************************/
/* Filter */
/******************************************************************************/
@@ -614,3 +701,150 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->cat[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* KM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_km_compare(const struct hw_db_inline_km_rcp_data *data1,
+ const struct hw_db_inline_km_rcp_data *data2)
+{
+ return data1->rcp == data2->rcp;
+}
+
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_km_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_RCP;
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ if (!found && db->km[i].ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (db->km[i].ref > 0 && hw_db_inline_km_compare(data, &db->km[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->km[idx.id1].data, data, sizeof(struct hw_db_inline_km_rcp_data));
+ db->km[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->km[idx.id1].ref += 1;
+}
+
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ (void)db_handle;
+
+ if (idx.error)
+ return;
+}
+
+/******************************************************************************/
+/* KM FT */
+/******************************************************************************/
+
+static int hw_db_inline_km_ft_compare(const struct hw_db_inline_km_ft_data *data1,
+ const struct hw_db_inline_km_ft_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[data->km.id1];
+ struct hw_db_km_ft idx = { .raw = 0 };
+ uint32_t cat_offset = data->cat.ids * db->nb_cat;
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_FT;
+ idx.id2 = data->km.id1;
+ idx.id3 = data->cat.ids;
+
+ if (km_rcp->data.rcp == 0) {
+ idx.id1 = 0;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_km_ft; ++i) {
+ const struct hw_db_inline_resource_db_km_ft *km_ft = &km_rcp->ft[cat_offset + i];
+
+ if (!found && km_ft->ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (km_ft->ref > 0 && hw_db_inline_km_ft_compare(data, &km_ft->data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&km_rcp->ft[cat_offset + idx.id1].data, data,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error) {
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+ db->km[idx.id2].ft[cat_offset + idx.id1].ref += 1;
+ }
+}
+
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[idx.id2];
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+
+ if (idx.error)
+ return;
+
+ km_rcp->ft[cat_offset + idx.id1].ref -= 1;
+
+ if (km_rcp->ft[cat_offset + idx.id1].ref <= 0) {
+ memset(&km_rcp->ft[cat_offset + idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 0;
+ }
+}
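hw_db_inline_km_add() above follows a recurring slot pattern in this file: scan a fixed table for either an identical live entry (and bump its refcount) or the first free slot (and claim it with refcount 1). A self-contained sketch of that pattern, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

#define NB_SLOTS 4

/* Minimal stand-in for the reference-counted resource records. */
struct slot {
	uint32_t data;
	int ref;
};

/* Returns the slot index, or -1 when the table is exhausted. */
static int slot_add(struct slot *tbl, uint32_t data)
{
	int free_idx = -1;

	for (int i = 0; i < NB_SLOTS; ++i) {
		/* remember the first free slot in case no match is found */
		if (free_idx < 0 && tbl[i].ref <= 0)
			free_idx = i;

		if (tbl[i].ref > 0 && tbl[i].data == data) {
			tbl[i].ref += 1; /* deduplicate: reuse live entry */
			return i;
		}
	}

	if (free_idx < 0)
		return -1; /* resource exhaustion */

	tbl[free_idx].data = data;
	tbl[free_idx].ref = 1;
	return free_idx;
}
```

The single pass finds both candidates at once, which is why the driver can return early as soon as a matching live entry is seen.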
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index d0435acaef..e104ba7327 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_action_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cot_idx {
HW_DB_IDX;
};
@@ -48,12 +52,22 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_km_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_km_ft {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_KM_FT,
};
/* Functionality data types */
@@ -123,6 +137,16 @@ struct hw_db_inline_action_set_data {
};
};
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -130,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle);
void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
uint32_t size);
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
@@ -158,6 +184,18 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
/**/
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data);
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data);
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+
+/**/
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
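The `hw_db_km_idx`/`hw_db_km_ft` handles declared above are built on the HW_DB_IDX member set and are stored via `.raw` into flat `uint32_t` arrays (`fh->db_idxs`), while fields like `.type`, `.id1` and `.error` are accessed individually. That implies a union-with-bitfields layout; a sketch under assumed field widths (the driver's actual widths may differ):

```c
#include <assert.h>
#include <stdint.h>

/* Guessed layout for an index handle: a bitfield view for typed access and
 * a .raw view for storage in a flat uint32_t array. */
union db_idx {
	uint32_t raw;
	struct {
		uint32_t type : 8;  /* enum hw_db_idx_type */
		uint32_t id1 : 8;
		uint32_t id2 : 8;
		uint32_t id3 : 7;
		uint32_t error : 1; /* set when allocation failed */
	};
};
```

Round-tripping through `.raw` within one build is well defined, which is what lets create_flow_filter() stash handles and hw_db_inline_deref_idxs() later reinterpret them by `.type`.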
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6d72f8d99b..beb7fe2cb3 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2335,6 +2335,23 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ const bool empty_pattern = fd_has_empty_pattern(fd);
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = empty_pattern ? 0x0 : 0x4, /* FT key C */
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle, &cot_data);
+ local_idxs[(*local_idx_counter)++] = cot_idx.raw;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Finalize QSL */
struct hw_db_qsl_idx qsl_idx =
hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
@@ -2429,6 +2446,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ int identical_km_entry_ft = -1;
+
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -2503,6 +2522,130 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ /* Setup KM RCP */
+ struct hw_db_inline_km_rcp_data km_rcp_data = { .rcp = 0 };
+
+ if (fd->km.num_ftype_elem) {
+ struct flow_handle *flow = dev->ndev->flow_base, *found_flow = NULL;
+
+ if (km_key_create(&fd->km, fh->port_id)) {
+ NT_LOG(ERR, FILTER, "KM creation failed");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.be = &dev->ndev->be;
+
+ /* Look for existing KM RCPs */
+ while (flow) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLOW &&
+ flow->fd->km.flow_type) {
+ int res = km_key_compare(&fd->km, &flow->fd->km);
+
+ if (res < 0) {
+ /* Flow rcp and match data is identical */
+ identical_km_entry_ft = flow->fd->km.flow_type;
+ found_flow = flow;
+ break;
+ }
+
+ if (res > 0) {
+ /* Flow rcp found and match data is different */
+ found_flow = flow;
+ }
+ }
+
+ flow = flow->next;
+ }
+
+ km_attach_ndev_resource_management(&fd->km, &dev->ndev->km_res_handle);
+
+ if (found_flow != NULL) {
+ /* Reuse existing KM RCP */
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)
+ found_flow->flm_db_idxs,
+ found_flow->flm_db_idx_counter);
+
+ if (other_km_rcp_data == NULL ||
+ flow_nic_ref_resource(dev->ndev, RES_KM_CATEGORY,
+ other_km_rcp_data->rcp)) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference existing KM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_data.rcp = other_km_rcp_data->rcp;
+ } else {
+ /* Alloc new KM RCP */
+ int rcp = flow_nic_alloc_resource(dev->ndev, RES_KM_CATEGORY, 1);
+
+ if (rcp < 0) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference KM RCP resource (flow_nic_alloc)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_set(&fd->km, rcp);
+ km_rcp_data.rcp = (uint32_t)rcp;
+ }
+ }
+
+ struct hw_db_km_idx km_idx =
+ hw_db_inline_km_add(dev->ndev, dev->ndev->hw_db_handle, &km_rcp_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = km_idx.raw;
+
+ if (km_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM RCP resource (db_inline)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Setup KM FT */
+ struct hw_db_inline_km_ft_data km_ft_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ };
+ struct hw_db_km_ft km_ft_idx =
+ hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = km_ft_idx.raw;
+
+ if (km_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Finalize KM RCP */
+ if (fd->km.num_ftype_elem) {
+ if (identical_km_entry_ft >= 0 && identical_km_entry_ft != km_ft_idx.id1) {
+ NT_LOG(ERR, FILTER,
+ "Identical KM matches cannot have different KM FTs");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.flow_type = km_ft_idx.id1;
+
+ if (fd->km.target == KM_CAM) {
+ uint32_t ft_a_mask = 0;
+ hw_mod_km_rcp_get(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0, &ft_a_mask);
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0,
+ ft_a_mask | (1 << fd->km.flow_type));
+ }
+
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)km_rcp_data.rcp, 1);
+
+ km_write_data_match_entry(&fd->km, 0);
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2783,6 +2926,25 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
} else {
NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->fd->km.num_ftype_elem) {
+ km_clear_data_match_entry(&fh->fd->km);
+
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ if (other_km_rcp_data != NULL &&
+ flow_nic_deref_resource(dev->ndev, RES_KM_CATEGORY,
+ (int)other_km_rcp_data->rcp) == 0) {
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_PRESET_ALL,
+ (int)other_km_rcp_data->rcp, 0, 0);
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)other_km_rcp_data->rcp,
+ 1);
+ }
+ }
+
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
(struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
free(fh->fd);
--
2.45.0
* [PATCH v3 31/73] net/ntnic: add hash API
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (29 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 30/73] net/ntnic: add KM module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 32/73] net/ntnic: add TPE module Serhii Iliushyk
` (41 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Hasher module calculates a configurable hash value
to be used internally by the FPGA.
The module supports both Toeplitz and NT-hash.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 40 +
drivers/net/ntnic/include/flow_api_engine.h | 17 +
drivers/net/ntnic/include/hw_mod_backend.h | 20 +
.../ntnic/include/stream_binary_flow_api.h | 25 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 212 +++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 ++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 25 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 ++++
.../profile_inline/flow_api_hw_db_inline.c | 142 +++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 850 +++++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 4 +
drivers/net/ntnic/ntnic_mod_reg.h | 4 +
15 files changed, 1706 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
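One of the two algorithms the commit message names is Toeplitz. As a reference point, the textbook Toeplitz hash XORs, for every set bit of the input, the 32-bit window of the secret key aligned with that bit; the sketch below follows that definition and is not the ntnic FPGA implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Textbook Toeplitz RSS hash. The key must be at least (len + 4) bytes so
 * the sliding window never reads past its end. */
static uint32_t toeplitz_hash(const uint8_t *key, const uint8_t *input, size_t len)
{
	uint32_t result = 0;
	/* 32-bit sliding window over the key, seeded with its first 4 bytes */
	uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
			  ((uint32_t)key[2] << 8) | (uint32_t)key[3];

	for (size_t i = 0; i < len; ++i) {
		for (int bit = 7; bit >= 0; --bit) {
			if (input[i] & (1u << bit))
				result ^= window;

			/* slide the window left by one key bit */
			window = (window << 1) | ((key[i + 4] >> bit) & 1u);
		}
	}

	return result;
}
```

Two properties follow directly from the definition: an all-zero input hashes to 0, and an input with only its very first bit set hashes to the first 32 bits of the key.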
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index edffd0a57a..2e96fa5bed 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -29,6 +29,37 @@ struct hw_mod_resource_s {
*/
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev);
+/**
+ * A structure used to configure the Receive Side Scaling (RSS) feature
+ * of an Ethernet port.
+ */
+struct nt_eth_rss_conf {
+ /**
+ * In rte_eth_dev_rss_hash_conf_get(), the *rss_key_len* should be
+ * greater than or equal to the *hash_key_size* which get from
+ * rte_eth_dev_info_get() API. And the *rss_key* should contain at least
+ * *hash_key_size* bytes. If not meet these requirements, the query
+ * result is unreliable even if the operation returns success.
+ *
+ * In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(), if
+ * *rss_key* is not NULL, the *rss_key_len* indicates the length of the
+ * *rss_key* in bytes and it should be equal to *hash_key_size*.
+ * If *rss_key* is NULL, drivers are free to use a random or a default key.
+ */
+ uint8_t rss_key[MAX_RSS_KEY_LEN];
+ /**
+ * Indicates the type of packets or the specific part of packets to
+ * which RSS hashing is to be applied.
+ */
+ uint64_t rss_hf;
+ /**
+ * Hash algorithm.
+ */
+ enum rte_eth_hash_function algorithm;
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask);
+
struct flow_eth_dev {
/* NIC that owns this port device */
struct flow_nic_dev *ndev;
@@ -49,6 +80,11 @@ struct flow_eth_dev {
struct flow_eth_dev *next;
};
+enum flow_nic_hash_e {
+ HASH_ALGO_ROUND_ROBIN = 0,
+ HASH_ALGO_5TUPLE,
+};
+
/* registered NIC backends */
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
@@ -191,4 +227,8 @@ void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm);
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index a0f02f4e8a..e52363f04e 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,7 @@ struct km_flow_def_s {
int bank_used;
uint32_t *cuckoo_moves; /* for CAM statistics only */
struct cam_distrib_s *cam_dist;
+ struct hasher_s *hsh;
/* TCAM specific bank management */
struct tcam_distrib_s *tcam_dist;
@@ -136,6 +137,17 @@ struct km_flow_def_s {
int tcam_record;
};
+/*
+ * RSS configuration, see struct rte_flow_action_rss
+ */
+struct hsh_def_s {
+ enum rte_eth_hash_function func; /* RSS hash function to apply */
+ /* RSS hash types, see definition of RTE_ETH_RSS_* for hash calculation options */
+ uint64_t types;
+ uint32_t key_len; /* Hash key length in bytes. */
+ const uint8_t *key; /* Hash key. */
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -247,6 +259,11 @@ struct nic_flow_def {
* Key Matcher flow definitions
*/
struct km_flow_def_s km;
+
+ /*
+ * Hash module RSS definitions
+ */
+ struct hsh_def_s hsh;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 26903f2183..cee148807a 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -149,14 +149,27 @@ enum km_flm_if_select_e {
int debug
enum frame_offs_e {
+ DYN_SOF = 0,
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
+ DYN_MPLS = 3,
DYN_L3 = 4,
+ DYN_ID_IPV4_6 = 5,
+ DYN_FINAL_IP_DST = 6,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
+ DYN_TUN_PAYLOAD = 9,
+ DYN_TUN_L2 = 10,
+ DYN_TUN_VLAN = 11,
+ DYN_TUN_MPLS = 12,
DYN_TUN_L3 = 13,
+ DYN_TUN_ID_IPV4_6 = 14,
+ DYN_TUN_FINAL_IP_DST = 15,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ DYN_EOF = 18,
+ DYN_L3_PAYLOAD_END = 19,
+ DYN_TUN_L3_PAYLOAD_END = 20,
SB_VNI = SWX_INFO | 1,
SB_MAC_PORT = SWX_INFO | 2,
SB_KCC_ID = SWX_INFO | 3
@@ -227,6 +240,11 @@ enum {
};
+enum {
+ HASH_HASH_NONE = 0,
+ HASH_5TUPLE = 8,
+};
+
enum {
CPY_SELECT_DSCP_IPV4 = 0,
CPY_SELECT_DSCP_IPV6 = 1,
@@ -670,6 +688,8 @@ int hw_mod_hsh_alloc(struct flow_api_backend_s *be);
void hw_mod_hsh_free(struct flow_api_backend_s *be);
int hw_mod_hsh_reset(struct flow_api_backend_s *be);
int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value);
struct qsl_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 8097518d61..e5fe686d99 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,6 +12,31 @@
/* Max RSS hash key length in bytes */
#define MAX_RSS_KEY_LEN 40
+/* NT specific MASKs for RSS configuration */
+/* NOTE: Masks are required for correct RSS configuration, do not modify them! */
+#define NT_ETH_RSS_IPV4_MASK \
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+
+#define NT_ETH_RSS_IPV6_MASK \
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NT_ETH_RSS_IP_MASK \
+ (NT_ETH_RSS_IPV4_MASK | NT_ETH_RSS_IPV6_MASK | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
+
+/* List of all RSS flags supported for RSS calculation offload */
+#define NT_ETH_RSS_OFFLOAD_MASK \
+ (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_IPV4_CHKSUM | RTE_ETH_RSS_L4_CHKSUM | RTE_ETH_RSS_PORT | RTE_ETH_RSS_GTPU)
+
/*
* Flow frontend for binary programming interface
*/
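A driver typically validates a requested `rss_hf` against such an offload mask by checking that no unsupported bit is set. A hedged sketch of that check, using a few bit positions documented in the `rss_to_string[]` comments below (the abbreviated mask here is illustrative, not the full `NT_ETH_RSS_OFFLOAD_MASK`):

```c
#include <assert.h>
#include <stdint.h>

#define RTE_BIT64(n) (UINT64_C(1) << (n))

/* A few RSS bits, positions as listed in the rss_to_string[] comments. */
#define RSS_IPV4              RTE_BIT64(2)
#define RSS_NONFRAG_IPV4_TCP  RTE_BIT64(4)
#define RSS_VXLAN             RTE_BIT64(19)	/* not supported by this NIC */

/* Abbreviated stand-in for NT_ETH_RSS_OFFLOAD_MASK. */
#define OFFLOAD_MASK (RSS_IPV4 | RSS_NONFRAG_IPV4_TCP)

/* Accept a configuration only if every requested bit is offloadable. */
static int rss_hf_supported(uint64_t rss_hf)
{
	return (rss_hf & ~(uint64_t)OFFLOAD_MASK) == 0;
}
```

With this shape, any future flag the hardware learns to hash on is enabled by adding one bit to the mask, with no change to the validation logic.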
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e1fef37ccb..d7e6d05556 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -56,6 +56,7 @@ sources = files(
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
+ 'nthw/flow_api/flow_hasher.c',
'nthw/flow_api/flow_kcc.c',
'nthw/flow_api/flow_km.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index a51d621ef9..043e4244fc 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -2,6 +2,8 @@
* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2023 Napatech A/S
*/
+#include "ntlog.h"
+#include "nt_util.h"
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
@@ -12,6 +14,11 @@
#define SCATTER_GATHER
+#define RSS_TO_STRING(name) \
+ { \
+ name, #name \
+ }
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -807,6 +814,211 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
return ndev->be.be_dev;
}
+/* Information for a given RSS type. */
+struct rss_type_info {
+ uint64_t rss_type;
+ const char *str;
+};
+
+static struct rss_type_info rss_to_string[] = {
+ /* RTE_BIT64(2) IPv4 dst + IPv4 src */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4),
+ /* RTE_BIT64(3) IPv4 dst + IPv4 src + Identification of group of fragments */
+ RSS_TO_STRING(RTE_ETH_RSS_FRAG_IPV4),
+ /* RTE_BIT64(4) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_TCP),
+ /* RTE_BIT64(5) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_UDP),
+ /* RTE_BIT64(6) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_SCTP),
+ /* RTE_BIT64(7) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_OTHER),
+ /*
+ * RTE_BIT64(14) 128-bits of L2 payload starting after src MAC, i.e. including optional
+ * VLAN tag and ethertype. Overrides all L3 and L4 flags at the same level, but inner
+ * L2 payload can be combined with outer S-VLAN and GTPU TEID flags.
+ */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_PAYLOAD),
+ /* RTE_BIT64(18) L4 dst + L4 src + L4 protocol - see comment of RTE_ETH_RSS_L4_CHKSUM */
+ RSS_TO_STRING(RTE_ETH_RSS_PORT),
+ /* RTE_BIT64(19) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_VXLAN),
+ /* RTE_BIT64(20) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_GENEVE),
+ /* RTE_BIT64(21) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_NVGRE),
+ /* RTE_BIT64(23) GTP TEID - always from outer GTPU header */
+ RSS_TO_STRING(RTE_ETH_RSS_GTPU),
+ /* RTE_BIT64(24) MAC dst + MAC src */
+ RSS_TO_STRING(RTE_ETH_RSS_ETH),
+ /* RTE_BIT64(25) outermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_S_VLAN),
+ /* RTE_BIT64(26) innermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_C_VLAN),
+ /* RTE_BIT64(27) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ESP),
+ /* RTE_BIT64(28) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_AH),
+ /* RTE_BIT64(29) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV3),
+ /* RTE_BIT64(30) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PFCP),
+ /* RTE_BIT64(31) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PPPOE),
+ /* RTE_BIT64(32) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ECPRI),
+ /* RTE_BIT64(33) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_MPLS),
+ /* RTE_BIT64(34) IPv4 Header checksum + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4_CHKSUM),
+
+ /*
+ * if combined with RTE_ETH_RSS_NONFRAG_IPV4_[TCP|UDP|SCTP] then
+ * L4 protocol + chosen protocol header Checksum
+ * else
+ * error
+ */
+ /* RTE_BIT64(35) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_CHKSUM),
+#ifndef ANDROMEDA_DPDK_21_11
+ /* RTE_BIT64(36) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV2),
+#endif
+
+ { RTE_BIT64(37), "unknown_RTE_BIT64(37)" },
+ { RTE_BIT64(38), "unknown_RTE_BIT64(38)" },
+ { RTE_BIT64(39), "unknown_RTE_BIT64(39)" },
+ { RTE_BIT64(40), "unknown_RTE_BIT64(40)" },
+ { RTE_BIT64(41), "unknown_RTE_BIT64(41)" },
+ { RTE_BIT64(42), "unknown_RTE_BIT64(42)" },
+ { RTE_BIT64(43), "unknown_RTE_BIT64(43)" },
+ { RTE_BIT64(44), "unknown_RTE_BIT64(44)" },
+ { RTE_BIT64(45), "unknown_RTE_BIT64(45)" },
+ { RTE_BIT64(46), "unknown_RTE_BIT64(46)" },
+ { RTE_BIT64(47), "unknown_RTE_BIT64(47)" },
+ { RTE_BIT64(48), "unknown_RTE_BIT64(48)" },
+ { RTE_BIT64(49), "unknown_RTE_BIT64(49)" },
+
+ /* RTE_BIT64(50) outermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_OUTERMOST),
+ /* RTE_BIT64(51) innermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_INNERMOST),
+
+ /* RTE_BIT64(52) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE96),
+ /* RTE_BIT64(53) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE64),
+ /* RTE_BIT64(54) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE56),
+ /* RTE_BIT64(55) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE48),
+ /* RTE_BIT64(56) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE40),
+ /* RTE_BIT64(57) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE32),
+
+ /* RTE_BIT64(58) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_DST_ONLY),
+ /* RTE_BIT64(59) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_SRC_ONLY),
+ /* RTE_BIT64(60) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_DST_ONLY),
+ /* RTE_BIT64(61) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_SRC_ONLY),
+ /* RTE_BIT64(62) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_DST_ONLY),
+ /* RTE_BIT64(63) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_SRC_ONLY),
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask)
+{
+ if (str == NULL || str_len == 0)
+ return -1;
+
+ memset(str, 0x0, str_len);
+ uint16_t str_end = 0;
+ const struct rss_type_info *start = rss_to_string;
+
+ for (const struct rss_type_info *p = start; p != start + ARRAY_SIZE(rss_to_string); ++p) {
+ if (p->rss_type & hash_mask) {
+ if (strlen(prefix) + strlen(p->str) < (size_t)(str_len - str_end)) {
+ snprintf(str + str_end, str_len - str_end, "%s", prefix);
+ str_end += strlen(prefix);
+ snprintf(str + str_end, str_len - str_end, "%s", p->str);
+ str_end += strlen(p->str);
+
+ } else {
+ return -1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Hash
+ */
+
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm)
+{
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ switch (algorithm) {
+ case HASH_ALGO_5TUPLE:
+ /* set up IPv6 5-tuple hashing and enable the adaptive IPv4 mask bit */
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_OFS, hsh_idx, 0, -16);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_PE, hsh_idx, 0, DYN_L4);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_PE, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_P, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 1, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 2, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 3, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 4, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 5, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 6, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 7, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 8, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 9, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_VALID, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_TYPE, hsh_idx, 0, HASH_5TUPLE);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0, 1);
+
+ NT_LOG(DBG, FILTER, "Set IPv6 5-tuple hasher with adaptive IPv4 hashing");
+ break;
+
+ default:
+ case HASH_ALGO_ROUND_ROBIN:
+ /* zero is round-robin */
+ break;
+ }
+
+ return 0;
+}
+
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
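The contract of `sprint_nt_rss_mask()` - append `prefix` plus the flag name for every bit set in the mask, failing when the next entry would not fit - can be sketched against a two-entry table (the table contents and the `sprint_mask` name below are stand-ins, not the driver's full `rss_to_string[]`):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct rss_type_info { uint64_t rss_type; const char *str; };

/* Two-entry stand-in for the driver's rss_to_string[] table. */
static const struct rss_type_info tbl[] = {
	{ UINT64_C(1) << 2, "RSS_IPV4" },
	{ UINT64_C(1) << 4, "RSS_NONFRAG_IPV4_TCP" },
};

/* Append "<prefix><name>" per set bit; return -1 if the next entry
 * would not fit, keeping room for the terminating NUL. */
static int sprint_mask(char *str, size_t str_len, const char *prefix, uint64_t mask)
{
	if (str == NULL || str_len == 0)
		return -1;

	memset(str, 0, str_len);
	size_t end = 0;

	for (size_t i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
		if (!(tbl[i].rss_type & mask))
			continue;

		if (strlen(prefix) + strlen(tbl[i].str) >= str_len - end)
			return -1;

		end += (size_t)snprintf(str + end, str_len - end, "%s%s",
					prefix, tbl[i].str);
	}

	return 0;
}
```

This is the same all-or-nothing policy as the patch: a buffer too small for any entry yields an error rather than a silently truncated flag list.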
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.c b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
new file mode 100644
index 0000000000..86dfc16e79
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
@@ -0,0 +1,156 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <math.h>
+
+#include "flow_hasher.h"
+
+static uint32_t shuffle(uint32_t x)
+{
+ return ((x & 0x00000002) << 29) | ((x & 0xAAAAAAA8) >> 3) | ((x & 0x15555555) << 3) |
+ ((x & 0x40000000) >> 29);
+}
+
+static uint32_t ror_inv(uint32_t x, const int s)
+{
+ return (x >> s) | ((~x) << (32 - s));
+}
+
+static uint32_t combine(uint32_t x, uint32_t y)
+{
+ uint32_t x1 = ror_inv(x, 15);
+ uint32_t x2 = ror_inv(x, 13);
+ uint32_t y1 = ror_inv(y, 3);
+ uint32_t y2 = ror_inv(y, 27);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint32_t mix(uint32_t x, uint32_t y)
+{
+ return shuffle(combine(x, y));
+}
+
+static uint64_t ror_inv3(uint64_t x)
+{
+ const uint64_t m = 0xE0000000E0000000ULL;
+
+ return ((x >> 3) | m) ^ ((x << 29) & m);
+}
+
+static uint64_t ror_inv13(uint64_t x)
+{
+ const uint64_t m = 0xFFF80000FFF80000ULL;
+
+ return ((x >> 13) | m) ^ ((x << 19) & m);
+}
+
+static uint64_t ror_inv15(uint64_t x)
+{
+ const uint64_t m = 0xFFFE0000FFFE0000ULL;
+
+ return ((x >> 15) | m) ^ ((x << 17) & m);
+}
+
+static uint64_t ror_inv27(uint64_t x)
+{
+ const uint64_t m = 0xFFFFFFE0FFFFFFE0ULL;
+
+ return ((x >> 27) | m) ^ ((x << 5) & m);
+}
+
+static uint64_t shuffle64(uint64_t x)
+{
+ return ((x & 0x0000000200000002) << 29) | ((x & 0xAAAAAAA8AAAAAAA8) >> 3) |
+ ((x & 0x1555555515555555) << 3) | ((x & 0x4000000040000000) >> 29);
+}
+
+static uint64_t pair(uint32_t x, uint32_t y)
+{
+ return ((uint64_t)x << 32) | y;
+}
+
+static uint64_t combine64(uint64_t x, uint64_t y)
+{
+ uint64_t x1 = ror_inv15(x);
+ uint64_t x2 = ror_inv13(x);
+ uint64_t y1 = ror_inv3(y);
+ uint64_t y2 = ror_inv27(y);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint64_t mix64(uint64_t x, uint64_t y)
+{
+ return shuffle64(combine64(x, y));
+}
+
+static uint32_t calc16(const uint32_t key[16])
+{
+ /*
+ * 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Layer 0
+ * \./ \./ \./ \./ \./ \./ \./ \./
+ * 0 1 2 3 4 5 6 7 Layer 1
+ * \__.__/ \__.__/ \__.__/ \__.__/
+ * 0 1 2 3 Layer 2
+ * \______.______/ \______.______/
+ * 0 1 Layer 3
+ * \______________.______________/
+ * 0 Layer 4
+ * / \
+ * \./
+ * 0 Layer 5
+ * / \
+ * \./ Layer 6
+ * value
+ */
+
+ uint64_t z;
+ uint32_t x;
+
+ z = mix64(mix64(mix64(pair(key[0], key[8]), pair(key[1], key[9])),
+ mix64(pair(key[2], key[10]), pair(key[3], key[11]))),
+ mix64(mix64(pair(key[4], key[12]), pair(key[5], key[13])),
+ mix64(pair(key[6], key[14]), pair(key[7], key[15]))));
+
+ x = mix((uint32_t)(z >> 32), (uint32_t)z);
+ x = mix(x, ror_inv(x, 17));
+ x = combine(x, ror_inv(x, 17));
+
+ return x;
+}
+
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result)
+{
+ uint64_t val;
+ uint32_t res;
+
+ val = calc16(key);
+ res = (uint32_t)val;
+
+ if (hsh->cam_bw > 32)
+ val = (val << (hsh->cam_bw - 32)) ^ val;
+
+ for (int i = 0; i < hsh->banks; i++) {
+ result[i] = (unsigned int)(val & hsh->cam_records_bw_mask);
+ val = val >> hsh->cam_records_bw;
+ }
+
+ return res;
+}
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records)
+{
+ hsh->banks = banks;
+ hsh->cam_records_bw = (int)(log2(nb_records - 1) + 1);
+ hsh->cam_records_bw_mask = (1U << hsh->cam_records_bw) - 1;
+ hsh->cam_bw = hsh->banks * hsh->cam_records_bw;
+
+ return 0;
+}
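`init_hasher()` sizes the per-bank index width from the record count, and `gethash()` then slices the hash value into one index per bank. The sketch below reproduces that bookkeeping with integer arithmetic only (`log2(nb_records - 1) + 1` in the patch rounds up to the number of bits needed to index `nb_records` entries); the helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Bits needed to index nb_records entries, matching the patch's
 * (int)(log2(nb_records - 1) + 1), without libm. */
static int records_bw(int nb_records)
{
	int bw = 0;

	for (int v = nb_records - 1; v > 0; v >>= 1)
		bw++;

	return bw;
}

/* Slice 'val' into one index per bank, as done at the end of gethash(). */
static void split_banks(uint64_t val, int banks, int bw, int *result)
{
	uint32_t mask = (1U << bw) - 1;

	for (int i = 0; i < banks; i++) {
		result[i] = (int)(val & mask);
		val >>= bw;
	}
}
```

Each bank thus gets an independent slice of the hash, which is what allows the cuckoo scheme in flow_km.c to try several candidate positions per key.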
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.h b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
new file mode 100644
index 0000000000..15de8e9933
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_HASHER_H_
+#define _FLOW_HASHER_H_
+
+#include <stdint.h>
+
+struct hasher_s {
+ int banks;
+ int cam_records_bw;
+ uint32_t cam_records_bw_mask;
+ int cam_bw;
+};
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records);
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result);
+
+#endif /* _FLOW_HASHER_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 30d6ea728e..f79919cb81 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -9,6 +9,7 @@
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
#include "nt_util.h"
+#include "flow_hasher.h"
#define MAX_QWORDS 2
#define MAX_SWORDS 2
@@ -75,10 +76,25 @@ static int tcam_find_mapping(struct km_flow_def_s *km);
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
{
+ /*
+ * KM entries occupied in CAM - to manage cuckoo shuffling
+ * and to track CAM population and usage.
+ * KM entries occupied in TCAM - to track population and usage.
+ */
+ if (!*handle) {
+ *handle = calloc(1,
+ (size_t)CAM_ENTRIES + sizeof(uint32_t) + (size_t)TCAM_ENTRIES +
+ sizeof(struct hasher_s));
+ NT_LOG(DBG, FILTER, "Allocate NIC DEV CAM and TCAM record manager");
+ }
+
km->cam_dist = (struct cam_distrib_s *)*handle;
km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
km->tcam_dist =
(struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+
+ km->hsh = (struct hasher_s *)((char *)km->tcam_dist + TCAM_ENTRIES);
+ init_hasher(km->hsh, km->be->km.nb_cam_banks, km->be->km.nb_cam_records);
}
void km_free_ndev_resource_management(void **handle)
@@ -839,9 +855,18 @@ static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx
static int km_write_data_to_cam(struct km_flow_def_s *km)
{
int res = 0;
+ int val[MAX_BANKS];
assert(km->be->km.nb_cam_banks <= MAX_BANKS);
assert(km->cam_dist);
+ /* word list without info set */
+ gethash(km->hsh, km->entry_word, val);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ /* if paired, we always start on an even address - reset bit 0 */
+ km->record_indexes[i] = (km->cam_paired) ? val[i] & ~1 : val[i];
+ }
+
NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
km->record_indexes[1], km->record_indexes[2]);
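Paired CAM records occupy two consecutive slots, so `km_write_data_to_cam()` forces each hashed index onto an even start address by clearing bit 0. A one-function sketch of that adjustment (the helper name is illustrative):

```c
#include <assert.h>

/* Paired records occupy index and index+1, so the start index must be even. */
static int cam_start_index(int hashed, int cam_paired)
{
	return cam_paired ? (hashed & ~1) : hashed;
}
```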
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
index df5c00ac42..1750d09afb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
@@ -89,3 +89,182 @@ int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->hsh_rcp_flush(be->be_dev, &be->hsh, start_idx, count);
}
+
+static int hw_mod_hsh_rcp_mod(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t *value, int get)
+{
+ if (index >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 5:
+ switch (field) {
+ case HW_HSH_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->hsh.v5.rcp[index], (uint8_t)*value,
+ sizeof(struct hsh_v5_rcp_s));
+ break;
+
+ case HW_HSH_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off);
+ break;
+
+ case HW_HSH_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off,
+ be->hsh.nb_rcp);
+ break;
+
+ case HW_HSH_RCP_LOAD_DIST_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].load_dist_type, value);
+ break;
+
+ case HW_HSH_RCP_MAC_PORT_MASK:
+ if (word_off >= HSH_RCP_MAC_PORT_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].mac_port_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SORT:
+ GET_SET(be->hsh.v5.rcp[index].sort, value);
+ break;
+
+ case HW_HSH_RCP_QW0_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw0_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_HSH_RCP_QW4_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw4_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_PE:
+ GET_SET(be->hsh.v5.rcp[index].w8_pe, value);
+ break;
+
+ case HW_HSH_RCP_W8_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w8_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w8_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_PE:
+ GET_SET(be->hsh.v5.rcp[index].w9_pe, value);
+ break;
+
+ case HW_HSH_RCP_W9_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w9_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W9_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w9_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_P:
+ GET_SET(be->hsh.v5.rcp[index].w9_p, value);
+ break;
+
+ case HW_HSH_RCP_P_MASK:
+ GET_SET(be->hsh.v5.rcp[index].p_mask, value);
+ break;
+
+ case HW_HSH_RCP_WORD_MASK:
+ if (word_off >= HSH_RCP_WORD_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].word_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SEED:
+ GET_SET(be->hsh.v5.rcp[index].seed, value);
+ break;
+
+ case HW_HSH_RCP_TNL_P:
+ GET_SET(be->hsh.v5.rcp[index].tnl_p, value);
+ break;
+
+ case HW_HSH_RCP_HSH_VALID:
+ GET_SET(be->hsh.v5.rcp[index].hsh_valid, value);
+ break;
+
+ case HW_HSH_RCP_HSH_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].hsh_type, value);
+ break;
+
+ case HW_HSH_RCP_TOEPLITZ:
+ GET_SET(be->hsh.v5.rcp[index].toeplitz, value);
+ break;
+
+ case HW_HSH_RCP_K:
+ if (word_off >= HSH_RCP_KEY_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].k[word_off], value);
+ break;
+
+ case HW_HSH_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->hsh.v5.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 5 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value)
+{
+ return hw_mod_hsh_rcp_mod(be, field, index, word_off, &value, 0);
+}
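`hw_mod_hsh_rcp_mod()` follows the backend's GET_SET idiom: one switch arm serves both the getter and the setter, steered by the `get` flag, and `hw_mod_hsh_rcp_set()` is a thin wrapper passing `get = 0`. A self-contained miniature of the pattern (the struct, field IDs, and function names are stand-ins, not the patch's types):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a couple of hsh_v5_rcp_s fields. */
struct rcp_sketch { uint32_t seed; uint32_t hsh_type; };

enum { FLD_SEED, FLD_HSH_TYPE };

/* One accessor serves read and write, like hw_mod_hsh_rcp_mod(). */
static int rcp_mod(struct rcp_sketch *rcp, int field, uint32_t *value, int get)
{
	switch (field) {
	case FLD_SEED:
		if (get) *value = rcp->seed; else rcp->seed = *value;
		break;
	case FLD_HSH_TYPE:
		if (get) *value = rcp->hsh_type; else rcp->hsh_type = *value;
		break;
	default:
		return -1;	/* unsupported field */
	}
	return 0;
}

/* The set entry point is a wrapper with get = 0, as in the patch. */
static int rcp_set(struct rcp_sketch *rcp, int field, uint32_t value)
{
	return rcp_mod(rcp, field, &value, 0);
}
```

Keeping get and set in one switch guarantees the two directions can never disagree about which fields exist or how they are bounds-checked.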
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4737460cdf..068c890b45 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,9 +30,15 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_hsh {
+ struct hw_db_inline_hsh_data data;
+ int ref;
+ } *hsh;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_hsh;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -122,6 +128,21 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
}
+ db->cfn = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cfn));
+
+ if (db->cfn == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_hsh = ndev->be.hsh.nb_rcp;
+ db->hsh = calloc(db->nb_hsh, sizeof(struct hw_db_inline_resource_db_hsh));
+
+ if (db->hsh == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -133,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->hsh);
+
free(db->cat);
if (db->km) {
@@ -180,6 +203,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_HSH:
+ hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -219,6 +246,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_KM_FT:
return NULL; /* FTs can't be easily looked up */
+ case HW_DB_IDX_TYPE_HSH:
+ return &db->hsh[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -247,6 +277,7 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
{
(void)ft;
(void)qsl_hw_id;
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
@@ -848,3 +879,114 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* HSH */
+/******************************************************************************/
+
+static int hw_db_inline_hsh_compare(const struct hw_db_inline_hsh_data *data1,
+ const struct hw_db_inline_hsh_data *data2)
+{
+ for (uint32_t i = 0; i < MAX_RSS_KEY_LEN; ++i)
+ if (data1->key[i] != data2->key[i])
+ return 0;
+
+ return data1->func == data2->func && data1->hash_mask == data2->hash_mask;
+}
+
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_hsh_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_HSH;
+
+ /*
+ * Check if the default hash configuration shall be used, i.e. rss_hf is not set.
+ * NOTE: hsh id 0 is reserved for the "default" HSH used by port
+ * configuration; all ports share the same default hash settings.
+ */
+ if (data->hash_mask == 0) {
+ idx.ids = 0;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_hsh; ++i) {
+ int ref = db->hsh[i].ref;
+
+ if (ref > 0 && hw_db_inline_hsh_compare(data, &db->hsh[i].data)) {
+ idx.ids = i;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ struct nt_eth_rss_conf tmp_rss_conf;
+
+ tmp_rss_conf.rss_hf = data->hash_mask;
+ memcpy(tmp_rss_conf.rss_key, data->key, MAX_RSS_KEY_LEN);
+ tmp_rss_conf.algorithm = data->func;
+ int res = flow_nic_set_hasher_fields(ndev, idx.ids, tmp_rss_conf);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->hsh[idx.ids].ref = 1;
+ memcpy(&db->hsh[idx.ids].data, data, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, idx.ids);
+
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->hsh[idx.ids].ref += 1;
+}
+
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->hsh[idx.ids].ref -= 1;
+
+ if (db->hsh[idx.ids].ref <= 0) {
+ /*
+ * NOTE: hsh id 0 is reserved for "default" HSH used by
+ * port configuration, so we shall keep it even if
+ * it is not used by any flow
+ */
+ if (idx.ids > 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, idx.ids, 0, 0x0);
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->hsh[idx.ids].data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_free_resource(ndev, RES_HSH_RCP, idx.ids);
+ }
+
+ db->hsh[idx.ids].ref = 0;
+ }
+}
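`hw_db_inline_hsh_add()` implements a ref-counted deduplication table: identical configurations share one recipe index, index 0 is reserved for the port-level default, and otherwise the first free slot is claimed. A compact sketch of that lookup policy, with the config reduced to a single data word (names and types are illustrative):

```c
#include <assert.h>
#include <stdint.h>

struct db_entry { uint64_t data; int ref; };

/* Dedup-or-allocate, mirroring hw_db_inline_hsh_add():
 * - data == 0 means "default config" -> reserved index 0
 * - an in-use entry with equal data is shared (ref++)
 * - otherwise the first free slot (ref <= 0) is claimed
 * - -1 on exhaustion */
static int db_add(struct db_entry *db, int nb, uint64_t data)
{
	int free_idx = -1;

	if (data == 0) {
		db[0].ref++;
		return 0;
	}

	for (int i = 1; i < nb; i++) {
		if (db[i].ref > 0 && db[i].data == data) {
			db[i].ref++;
			return i;
		}
		if (free_idx < 0 && db[i].ref <= 0)
			free_idx = i;
	}

	if (free_idx < 0)
		return -1;

	db[free_idx].data = data;
	db[free_idx].ref = 1;
	return free_idx;
}
```

Because flows with equal RSS settings land on the same index, the scarce hardware HSH recipes are consumed per distinct configuration, not per flow.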
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index e104ba7327..c97bdef1b7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -60,6 +60,10 @@ struct hw_db_km_ft {
HW_DB_IDX;
};
+struct hw_db_hsh_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
@@ -68,6 +72,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_SLC_LR,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
+ HW_DB_IDX_TYPE_HSH,
};
/* Functionality data types */
@@ -133,6 +138,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_hsh_idx hsh;
};
};
};
@@ -175,6 +181,11 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data);
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index beb7fe2cb3..ebdf68385e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -25,6 +25,15 @@
#define NT_VIOLATING_MBR_CFN 0
#define NT_VIOLATING_MBR_QSL 1
+#define RTE_ETH_RSS_UDP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)
+
+#define RTE_ETH_RSS_TCP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX)
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2323,10 +2332,27 @@ static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_d
}
}
+static void setup_db_hsh_data(struct nic_flow_def *fd, struct hw_db_inline_hsh_data *hsh_data)
+{
+ memset(hsh_data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+
+ hsh_data->func = fd->hsh.func;
+ hsh_data->hash_mask = fd->hsh.types;
+
+ if (fd->hsh.key != NULL) {
+ /*
+ * Just a safeguard. Checking and error handling of rss_key_len
+ * shall be done at API layers above.
+ */
+ memcpy(&hsh_data->key, fd->hsh.key,
+ fd->hsh.key_len < MAX_RSS_KEY_LEN ? fd->hsh.key_len : MAX_RSS_KEY_LEN);
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
- const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data,
uint32_t group __rte_unused,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
@@ -2363,6 +2389,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle, hsh_data);
+ local_idxs[(*local_idx_counter)++] = hsh_idx.raw;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2406,6 +2443,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
+ setup_db_hsh_data(fd, &hsh_data);
if (attr->group > 0 && fd_has_empty_pattern(fd)) {
/*
@@ -2489,6 +2527,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle,
+ &hsh_data);
+ fh->db_idxs[fh->db_idx_counter++] = hsh_idx.raw;
+ action_set_data.hsh = hsh_idx;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2668,6 +2719,126 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
return NULL;
}
+/*
+ * Public functions
+ */
+
+/*
+ * FPGA uses up to 10 32-bit words (320 bits) for hash calculation + 8 bits for L4 protocol number.
+ * Hashed data are split between two 128-bit Quad Words (QW)
+ * and two 32-bit Words (W), which can refer to different header parts.
+ */
+enum hsh_words_id {
+ HSH_WORDS_QW0 = 0,
+ HSH_WORDS_QW4,
+ HSH_WORDS_W8,
+ HSH_WORDS_W9,
+ HSH_WORDS_SIZE,
+};
+
+/* struct with details about hash QWs & Ws */
+struct hsh_words {
+ /*
+ * index of W (word) or index of 1st word of QW (quad word)
+ * is used for hash mask calculation
+ */
+ uint8_t index;
+ uint8_t toeplitz_index; /* offset in Bytes of given [Q]W inside Toeplitz RSS key */
+ enum hw_hsh_e pe; /* offset to header part, e.g. beginning of L4 */
+ enum hw_hsh_e ofs; /* relative offset in BYTES to 'pe' header offset above */
+ uint16_t bit_len; /* max length of header part in bits to fit into QW/W */
+ bool free; /* only free words can be used for hsh calculation */
+};
+
+static enum hsh_words_id get_free_word(struct hsh_words *words, uint16_t bit_len)
+{
+ enum hsh_words_id ret = HSH_WORDS_SIZE;
+ uint16_t ret_bit_len = UINT16_MAX;
+
+ for (enum hsh_words_id i = HSH_WORDS_QW0; i < HSH_WORDS_SIZE; i++) {
+ if (words[i].free && bit_len <= words[i].bit_len &&
+ words[i].bit_len < ret_bit_len) {
+ ret = i;
+ ret_bit_len = words[i].bit_len;
+ }
+ }
+
+ return ret;
+}
+
+static int flow_nic_set_hasher_part_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct hsh_words *words, uint32_t pe, uint32_t ofs,
+ int bit_len, bool toeplitz)
+{
+ int res = 0;
+
+ /* check if there is any free word, which can accommodate header part of given 'bit_len' */
+ enum hsh_words_id word = get_free_word(words, bit_len);
+
+ if (word == HSH_WORDS_SIZE) {
+ NT_LOG(ERR, FILTER, "Cannot add additional %d bits into hash", bit_len);
+ return -1;
+ }
+
+ words[word].free = false;
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].pe, hsh_idx, 0, pe);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].pe,
+ hsh_idx, pe);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].ofs, hsh_idx, 0, ofs);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].ofs,
+ hsh_idx, ofs);
+
+ /* set HW_HSH_RCP_WORD_MASK based on used QW/W and given 'bit_len' */
+ int mask_bit_len = bit_len;
+ uint32_t mask = 0x0;
+ uint32_t mask_be = 0x0;
+ uint32_t toeplitz_mask[9] = { 0x0 };
+ /* iterate through all words of QW */
+ uint16_t words_count = words[word].bit_len / 32;
+
+ for (uint16_t mask_off = 1; mask_off <= words_count; mask_off++) {
+ if (mask_bit_len >= 32) {
+ mask_bit_len -= 32;
+ mask = 0xffffffff;
+ mask_be = mask;
+
+ } else if (mask_bit_len > 0) {
+ /* keep bits from left to right, i.e. little to big endian */
+ mask_be = 0xffffffff >> (32 - mask_bit_len);
+ mask = mask_be << (32 - mask_bit_len);
+ mask_bit_len = 0;
+
+ } else {
+ mask = 0x0;
+ mask_be = 0x0;
+ }
+
+ /* reorder QW words mask from little to big endian */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx,
+ words[word].index + words_count - mask_off, mask);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, words[word].index + words_count - mask_off, mask);
+ toeplitz_mask[words[word].toeplitz_index + mask_off - 1] = mask_be;
+ }
+
+ if (toeplitz) {
+ NT_LOG(DBG, FILTER,
+ "Partial Toeplitz RSS key mask: %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 "",
+ toeplitz_mask[8], toeplitz_mask[7], toeplitz_mask[6], toeplitz_mask[5],
+ toeplitz_mask[4], toeplitz_mask[3], toeplitz_mask[2], toeplitz_mask[1],
+ toeplitz_mask[0]);
+ NT_LOG(DBG, FILTER,
+ " MSB LSB");
+ }
+
+ return res;
+}
+
/*
* Public functions
*/
@@ -2718,6 +2889,12 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+ /* Set default hasher recipe to 5-tuple */
+ flow_nic_set_hasher(ndev, 0, HASH_ALGO_5TUPLE);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -2784,6 +2961,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, 0, 0, 0);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
@@ -2981,6 +3162,672 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
+{
+ return (hash_mask & hash_bits) == hash_bits;
+}
+
+static __rte_always_inline void unset_bits(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ *hash_mask &= ~hash_bits;
+}
+
+static __rte_always_inline void unset_bits_and_log(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", *hash_mask & hash_bits) == 0)
+ NT_LOG(DBG, FILTER, "Configured RSS types:%s", rss_buffer);
+
+ unset_bits(hash_mask, hash_bits);
+}
+
+static __rte_always_inline void unset_bits_if_all_enabled(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ if (all_bits_enabled(*hash_mask, hash_bits))
+ unset_bits(hash_mask, hash_bits);
+}
+
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ uint64_t fields = rss_conf.rss_hf;
+
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", fields) == 0)
+ NT_LOG(DBG, FILTER, "Requested RSS types:%s", rss_buffer);
+
+ /*
+ * configure all (Q)Words usable for hash calculation
+ * Hash can be calculated from 4 independent header parts:
+ * | QW0 | QW4 | W8 | W9 |
+ * word | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
+ */
+ struct hsh_words words[HSH_WORDS_SIZE] = {
+ { 0, 5, HW_HSH_RCP_QW0_PE, HW_HSH_RCP_QW0_OFS, 128, true },
+ { 4, 1, HW_HSH_RCP_QW4_PE, HW_HSH_RCP_QW4_OFS, 128, true },
+ { 8, 0, HW_HSH_RCP_W8_PE, HW_HSH_RCP_W8_OFS, 32, true },
+ {
+ 9, 255, HW_HSH_RCP_W9_PE, HW_HSH_RCP_W9_OFS, 32,
+ true
+ }, /* not supported for Toeplitz */
+ };
+
+ int res = 0;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+ /* enable hashing */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+
+ /* configure selected hash function and its key */
+ bool toeplitz = false;
+
+ switch (rss_conf.algorithm) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ /* Use default NTH10 hashing algorithm */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 0);
+ /* Use 1st 32-bits from rss_key to configure NTH10 SEED */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0,
+ rss_conf.rss_key[0] << 24 | rss_conf.rss_key[1] << 16 |
+ rss_conf.rss_key[2] << 8 | rss_conf.rss_key[3]);
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ toeplitz = true;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 1);
+ uint8_t empty_key = 0;
+
+ /* Toeplitz key (always 40B) must be encoded from little to big endian */
+ for (uint8_t i = 0; i <= (MAX_RSS_KEY_LEN - 8); i += 8) {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 |
+ rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 |
+ rss_conf.rss_key[i + 7]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 | rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 | rss_conf.rss_key[i + 7]);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 |
+ rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 |
+ rss_conf.rss_key[i + 3]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 | rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 | rss_conf.rss_key[i + 3]);
+ empty_key |= rss_conf.rss_key[i] | rss_conf.rss_key[i + 1] |
+ rss_conf.rss_key[i + 2] | rss_conf.rss_key[i + 3] |
+ rss_conf.rss_key[i + 4] | rss_conf.rss_key[i + 5] |
+ rss_conf.rss_key[i + 6] | rss_conf.rss_key[i + 7];
+ }
+
+ if (empty_key == 0) {
+ NT_LOG(ERR, FILTER,
+ "Toeplitz key must be configured. Key with all bytes set to zero is not allowed.");
+ return -1;
+ }
+
+ words[HSH_WORDS_W9].free = false;
+ NT_LOG(DBG, FILTER,
+ "Toeplitz hashing is enabled thus W9 and P_MASK cannot be used.");
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Unknown hashing function %d requested", rss_conf.algorithm);
+ return -1;
+ }
+
+ /* indication that some IPv6 flag is present */
+ bool ipv6 = fields & (NT_ETH_RSS_IPV6_MASK);
+ /* store proto mask for later use at IP and L4 checksum handling */
+ uint64_t l4_proto_mask = fields &
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX);
+
+ /* outermost headers are used by default, so innermost bit takes precedence if detected */
+ bool outer = (fields & RTE_ETH_RSS_LEVEL_INNERMOST) ? false : true;
+ unset_bits(&fields, RTE_ETH_RSS_LEVEL_MASK);
+
+ if (fields == 0) {
+ NT_LOG(ERR, FILTER, "RSS hash configuration 0x%" PRIX64 " is not valid.",
+ rss_conf.rss_hf);
+ return -1;
+ }
+
+ /* indication that IPv4 `protocol` or IPv6 `next header` fields shall be part of the hash
+ */
+ bool l4_proto_hash = false;
+
+ /*
+ * check if SRC_ONLY & DST_ONLY are used simultaneously;
+ * According to DPDK, we shall behave like none of these bits is set
+ */
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+
+ /* L2 */
+ if (fields & (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 6, 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 96, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 6,
+ 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 96, toeplitz);
+ }
+
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY |
+ RTE_ETH_RSS_L2_DST_ONLY);
+ }
+
+ /*
+ * VLAN support of multiple VLAN headers,
+ * where S-VLAN is the first and C-VLAN the last VLAN header
+ */
+ if (fields & RTE_ETH_RSS_C_VLAN) {
+ /*
+ * use MPLS protocol offset, which points just after ethertype with relative
+ * offset -6 (i.e. 2 bytes
+ * of ethertype & size + 4 bytes of VLAN header field) to access last vlan header
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer C-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, -6,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner C-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ -6, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_C_VLAN);
+ }
+
+ if (fields & RTE_ETH_RSS_S_VLAN) {
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer S-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_FIRST_VLAN, 0, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner S-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_VLAN,
+ 0, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_S_VLAN);
+ }
+ /* L2 payload */
+ /* calculate hash of 128-bits of l2 payload; Use MPLS protocol offset to address the
+ * beginning of L2 payload even if MPLS header is not present
+ */
+ if (fields & RTE_ETH_RSS_L2_PAYLOAD) {
+ uint64_t outer_fields_enabled = 0;
+
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ 0, 128, toeplitz);
+ outer_fields_enabled = fields & RTE_ETH_RSS_GTPU;
+ }
+
+ /*
+ * L2 PAYLOAD hashing overrides all L3 & L4 RSS flags.
+ * Thus we can clear all remaining (supported)
+ * RSS flags...
+ */
+ unset_bits_and_log(&fields, NT_ETH_RSS_OFFLOAD_MASK);
+ /*
+ * ...but in case of INNER L2 PAYLOAD we must process
+ * "always outer" GTPU field if enabled
+ */
+ fields |= outer_fields_enabled;
+ }
+
+ /* L3 + L4 protocol number */
+ if (fields & RTE_ETH_RSS_IPV4_CHKSUM) {
+ /* only IPv4 checksum is supported by DPDK RTE_ETH_RSS_* types */
+ if (ipv6) {
+ NT_LOG(ERR, FILTER,
+ "RSS: IPv4 checksum requested with IPv6 header hashing!");
+ res = 1;
+
+ } else if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L3, 10,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L3,
+ 10, 16, toeplitz);
+ }
+
+ /*
+ * L3 checksum is made from whole L3 header, i.e. no need to process other
+ * L3 hashing flags
+ */
+ unset_bits_and_log(&fields, RTE_ETH_RSS_IPV4_CHKSUM | NT_ETH_RSS_IP_MASK);
+ }
+
+ if (fields & NT_ETH_RSS_IP_MASK) {
+ if (ipv6) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & (RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6)) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 32, toeplitz);
+ }
+ }
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0,
+ 1);
+
+ } else {
+ /* IPv4 */
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 32, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 16,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 64, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 32,
+ toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 16, 32,
+ toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 64,
+ toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & RTE_ETH_RSS_FRAG_IPV4) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 16, toeplitz);
+ }
+ }
+ }
+
+ /* check if L4 protocol type shall be part of hash */
+ if (l4_proto_mask)
+ l4_proto_hash = true;
+
+ unset_bits_and_log(&fields, NT_ETH_RSS_IP_MASK);
+ }
+
+ /* L4 */
+ if (fields & (RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 2, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 32, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 2,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 32, toeplitz);
+ }
+
+ l4_proto_hash = true;
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY);
+ }
+
+ /* IPv4 protocol / IPv6 next header fields */
+ if (l4_proto_hash) {
+ /* NOTE: HW_HSH_RCP_P_MASK is not supported for Toeplitz and thus one of QW0, QW4
+ * or W8 must be used to hash on `protocol` field of IPv4 or `next header` field of
+ * IPv6 header.
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 6, 8,
+ toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 9, 8,
+ toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 0);
+ }
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 6, 8, toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 9, 8, toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 1);
+ }
+ }
+
+ l4_proto_hash = false;
+ }
+
+ /*
+ * GTPU - for UPF use cases we always use TEID from outermost GTPU header
+ * even if other headers are innermost
+ */
+ if (fields & RTE_ETH_RSS_GTPU) {
+ NT_LOG(DBG, FILTER, "Set outer GTPU TEID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L4_PAYLOAD, 4, 32,
+ toeplitz);
+ unset_bits_and_log(&fields, RTE_ETH_RSS_GTPU);
+ }
+
+ /* Checksums */
+ /* only UDP, TCP and SCTP checksums are supported */
+ if (fields & RTE_ETH_RSS_L4_CHKSUM) {
+ switch (l4_proto_mask) {
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_UDP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 6, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 6, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_TCP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 16, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 16, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 8, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 8, 32,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+
+ /* none or unsupported protocol was chosen */
+ case 0:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing is supported only for UDP, TCP and SCTP protocols");
+ res = -1;
+ break;
+
+ /* multiple L4 protocols were selected */
+ default:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing can be enabled just for one of UDP, TCP or SCTP protocols");
+ res = -1;
+ break;
+ }
+ }
+
+ if (fields || res != 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", rss_conf.rss_hf) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration%s is not supported for hash func %s.",
+ rss_buffer,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration 0x%" PRIX64
+ " is not supported for hash func %s.",
+ rss_conf.rss_hf,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+ }
+
+ return -1;
+ }
+
+ return res;
+}
+
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -2994,6 +3841,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b87f8542ac..e623bb2352 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,4 +38,8 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 149c549112..1069be2f85 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -252,6 +252,10 @@ struct profile_inline_ops {
int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v3 32/73] net/ntnic: add TPE module
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TX Packet Editor is a software abstraction module that keeps
track of the handful of FPGA modules used to edit packets in the
TX pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 16 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 373 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 70 ++
.../profile_inline/flow_api_profile_inline.c | 127 ++-
5 files changed, 1342 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index cee148807a..e16dcd478f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -889,24 +889,40 @@ void hw_mod_tpe_free(struct flow_api_backend_s *be);
int hw_mod_tpe_reset(struct flow_api_backend_s *be);
int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value);
int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
enum debug_mode_e {
FLOW_BACKEND_DEBUG_MODE_NONE = 0x0000,
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
index 0d73b795d5..ba8f2d0dbb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
@@ -169,6 +169,82 @@ int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpp_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpp_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpp_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpp_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPP_RCP_EXP:
+ GET_SET(be->tpe.v3.rpp_rcp[index].exp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpp_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* IFR_RCP
*/
@@ -203,6 +279,90 @@ int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_ins_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_ins_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.ins_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_ins_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_INS_RCP_DYN:
+ GET_SET(be->tpe.v3.ins_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_INS_RCP_OFS:
+ GET_SET(be->tpe.v3.ins_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_INS_RCP_LEN:
+ GET_SET(be->tpe.v3.ins_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_ins_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RCP
*/
@@ -220,6 +380,102 @@ int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v3_rpl_v4_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RCP_DYN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_RPL_RCP_OFS:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_RPL_RCP_LEN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].len, value);
+ break;
+
+ case HW_TPE_RPL_RCP_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_RCP_EXT_PRIO:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ext_prio, value);
+ break;
+
+ case HW_TPE_RPL_RCP_ETH_TYPE_WR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].eth_type_wr, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_EXT
*/
@@ -237,6 +493,86 @@ int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_ext_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_ext_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_ext[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_ext_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value, be->tpe.nb_rpl_ext_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_EXT_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_ext[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_EXT_META_RPL_LEN:
+ GET_SET(be->tpe.v3.rpl_ext[index].meta_rpl_len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_ext_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RPL
*/
@@ -254,6 +590,89 @@ int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rpl_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rpl_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rpl[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_rpl_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value, be->tpe.nb_rpl_depth);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RPL_VALUE:
+ if (get)
+ memcpy(value, be->tpe.v3.rpl_rpl[index].value,
+ sizeof(uint32_t) * 4);
+
+ else
+ memcpy(be->tpe.v3.rpl_rpl[index].value, value,
+ sizeof(uint32_t) * 4);
+
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_tpe_rpl_rpl_mod(be, field, index, value, 0);
+}
+
/*
* CPY_RCP
*/
@@ -273,6 +692,96 @@ int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_cpy_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_cpy_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ const uint32_t cpy_size = be->tpe.nb_cpy_writers * be->tpe.nb_rcp_categories;
+
+ if (index >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.cpy_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_cpy_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value, cpy_size);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CPY_RCP_READER_SELECT:
+ GET_SET(be->tpe.v3.cpy_rcp[index].reader_select, value);
+ break;
+
+ case HW_TPE_CPY_RCP_DYN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_CPY_RCP_OFS:
+ GET_SET(be->tpe.v3.cpy_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_CPY_RCP_LEN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_cpy_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* HFU_RCP
*/
@@ -290,6 +799,166 @@ int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_hfu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_hfu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.hfu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_hfu_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_outer_l4_len, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_hfu_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* CSU_RCP
*/
@@ -306,3 +975,91 @@ int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_csu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+
+static int hw_mod_tpe_csu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.csu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_csu_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol4_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il4_cmd, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_csu_rcp_mod(be, field, index, &value, 0);
+}
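Every TPE register group above (RPP, INS, RPL, CPY, HFU, CSU) repeats the same shape: a static `*_mod()` worker that bounds-checks the index, switches on the backend version (`_VER_`), then switches on the field — with `HW_TPE_PRESET_ALL` as a write-only memset, `HW_TPE_FIND`/`HW_TPE_COMPARE` as read-only lookups, and per-field `GET_SET` accessors — while thin public `*_set()` wrappers call it with `get = 0`. A minimal standalone sketch of that dispatcher shape (the `demo_rcp` table, `DEMO_FIELD_*` names, and return codes are illustrative stand-ins, not the driver's actual types or macros):

```c
#include <string.h>

enum demo_field { DEMO_FIELD_PRESET_ALL, DEMO_FIELD_LEN };

struct demo_rcp { unsigned len; };

#define DEMO_NB 4
static struct demo_rcp demo_rcp[DEMO_NB];

/* Shared get/set worker: one switch per field, like hw_mod_tpe_*_mod(). */
static int demo_rcp_mod(enum demo_field field, unsigned index,
			unsigned *value, int get)
{
	if (index >= DEMO_NB)
		return -1;	/* stands in for INDEX_TOO_LARGE */

	switch (field) {
	case DEMO_FIELD_PRESET_ALL:
		if (get)
			return -2;	/* preset is write-only, as in the patch */
		memset(&demo_rcp[index], (unsigned char)*value,
		       sizeof(struct demo_rcp));
		break;

	case DEMO_FIELD_LEN:
		/* Equivalent of the GET_SET macro: read or write one field. */
		if (get)
			*value = demo_rcp[index].len;
		else
			demo_rcp[index].len = *value;
		break;

	default:
		return -2;	/* stands in for UNSUP_FIELD */
	}

	return 0;
}

/* Public wrappers mirror hw_mod_tpe_*_set(): pass the value by address. */
int demo_rcp_set(enum demo_field field, unsigned index, unsigned value)
{
	return demo_rcp_mod(field, index, &value, 0);
}

int demo_rcp_get(enum demo_field field, unsigned index, unsigned *value)
{
	return demo_rcp_mod(field, index, value, 1);
}
```

Keeping one worker per register group means the bounds check and version dispatch exist in exactly one place, and a future `*_get()` wrapper costs one line.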
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 068c890b45..dec96fce85 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,6 +30,17 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_tpe {
+ struct hw_db_inline_tpe_data data;
+ int ref;
+ } *tpe;
+
+ struct hw_db_inline_resource_db_tpe_ext {
+ struct hw_db_inline_tpe_ext_data data;
+ int replace_ram_idx;
+ int ref;
+ } *tpe_ext;
+
struct hw_db_inline_resource_db_hsh {
struct hw_db_inline_hsh_data data;
int ref;
@@ -38,6 +49,8 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_tpe;
+ uint32_t nb_tpe_ext;
uint32_t nb_hsh;
/* Items */
@@ -101,6 +114,22 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_tpe = ndev->be.tpe.nb_rcp_categories;
+ db->tpe = calloc(db->nb_tpe, sizeof(struct hw_db_inline_resource_db_tpe));
+
+ if (db->tpe == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_tpe_ext = ndev->be.tpe.nb_rpl_ext_categories;
+ db->tpe_ext = calloc(db->nb_tpe_ext, sizeof(struct hw_db_inline_resource_db_tpe_ext));
+
+ if (db->tpe_ext == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -154,6 +183,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->tpe);
+ free(db->tpe_ext);
free(db->hsh);
free(db->cat);
@@ -195,6 +226,15 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_TPE:
+ hw_db_inline_tpe_deref(ndev, db_handle, *(struct hw_db_tpe_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ hw_db_inline_tpe_ext_deref(ndev, db_handle,
+ *(struct hw_db_tpe_ext_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -240,6 +280,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_SLC_LR:
return &db->slc_lr[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_TPE:
+ return &db->tpe[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ return &db->tpe_ext[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -652,6 +698,333 @@ void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
}
}
+/******************************************************************************/
+/* TPE */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_compare(const struct hw_db_inline_tpe_data *data1,
+ const struct hw_db_inline_tpe_data *data2)
+{
+ for (int i = 0; i < 6; ++i)
+ if (data1->writer[i].en != data2->writer[i].en ||
+ data1->writer[i].reader_select != data2->writer[i].reader_select ||
+ data1->writer[i].dyn != data2->writer[i].dyn ||
+ data1->writer[i].ofs != data2->writer[i].ofs ||
+ data1->writer[i].len != data2->writer[i].len)
+ return 0;
+
+ return data1->insert_len == data2->insert_len && data1->new_outer == data2->new_outer &&
+ data1->calc_eth_type_from_inner_ip == data2->calc_eth_type_from_inner_ip &&
+ data1->ttl_en == data2->ttl_en && data1->ttl_dyn == data2->ttl_dyn &&
+ data1->ttl_ofs == data2->ttl_ofs && data1->len_a_en == data2->len_a_en &&
+ data1->len_a_pos_dyn == data2->len_a_pos_dyn &&
+ data1->len_a_pos_ofs == data2->len_a_pos_ofs &&
+ data1->len_a_add_dyn == data2->len_a_add_dyn &&
+ data1->len_a_add_ofs == data2->len_a_add_ofs &&
+ data1->len_a_sub_dyn == data2->len_a_sub_dyn &&
+ data1->len_b_en == data2->len_b_en &&
+ data1->len_b_pos_dyn == data2->len_b_pos_dyn &&
+ data1->len_b_pos_ofs == data2->len_b_pos_ofs &&
+ data1->len_b_add_dyn == data2->len_b_add_dyn &&
+ data1->len_b_add_ofs == data2->len_b_add_ofs &&
+ data1->len_b_sub_dyn == data2->len_b_sub_dyn &&
+ data1->len_c_en == data2->len_c_en &&
+ data1->len_c_pos_dyn == data2->len_c_pos_dyn &&
+ data1->len_c_pos_ofs == data2->len_c_pos_ofs &&
+ data1->len_c_add_dyn == data2->len_c_add_dyn &&
+ data1->len_c_add_ofs == data2->len_c_add_ofs &&
+ data1->len_c_sub_dyn == data2->len_c_sub_dyn;
+}
+
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE;
+
+ for (uint32_t i = 1; i < db->nb_tpe; ++i) {
+ int ref = db->tpe[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_compare(data, &db->tpe[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe[idx.ids].ref = 1;
+ memcpy(&db->tpe[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_data));
+
+ if (data->insert_len > 0) {
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_RPP_RCP_EXP, idx.ids, data->insert_len);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_RPL_PTR, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_EXT_PRIO, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_ETH_TYPE_WR, idx.ids,
+ data->calc_eth_type_from_inner_ip);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+ }
+
+ for (uint32_t i = 0; i < 6; ++i) {
+ if (data->writer[i].en) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i,
+ data->writer[i].reader_select);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, data->writer[i].dyn);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, data->writer[i].ofs);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, data->writer[i].len);
+
+ } else {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, 0);
+ }
+
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_WR, idx.ids, data->len_a_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN, idx.ids,
+ data->new_outer);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_DYN, idx.ids,
+ data->len_a_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_OFS, idx.ids,
+ data->len_a_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_DYN, idx.ids,
+ data->len_a_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_OFS, idx.ids,
+ data->len_a_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_SUB_DYN, idx.ids,
+ data->len_a_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_WR, idx.ids, data->len_b_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_DYN, idx.ids,
+ data->len_b_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_OFS, idx.ids,
+ data->len_b_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_DYN, idx.ids,
+ data->len_b_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_OFS, idx.ids,
+ data->len_b_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_SUB_DYN, idx.ids,
+ data->len_b_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_WR, idx.ids, data->len_c_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_DYN, idx.ids,
+ data->len_c_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_OFS, idx.ids,
+ data->len_c_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_DYN, idx.ids,
+ data->len_c_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_OFS, idx.ids,
+ data->len_c_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_SUB_DYN, idx.ids,
+ data->len_c_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_WR, idx.ids, data->ttl_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_DYN, idx.ids, data->ttl_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_OFS, idx.ids, data->ttl_ofs);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
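The writer loop in `hw_db_inline_tpe_add()` programs six CPY copy-writer recipes per TPE recipe, addressing them as `idx.ids + db->nb_tpe * i`. That works because the copy-recipe table is a 2D array (writers × recipe categories) flattened into one index space, as the earlier `cpy_size = be->tpe.nb_cpy_writers * be->tpe.nb_rcp_categories` bound suggests. A small sketch of that indexing (helper name is illustrative):

```c
/* The CPY RCP table holds nb_writers rows of nb_rcp recipes flattened into
 * one array; writer w of recipe r lives at r + nb_rcp * w. This mirrors the
 * idx.ids + db->nb_tpe * i arithmetic in hw_db_inline_tpe_add(). */
static unsigned cpy_rcp_entry(unsigned rcp_idx, unsigned writer,
			      unsigned nb_rcp)
{
	return rcp_idx + nb_rcp * writer;
}
```

With, say, 16 recipe categories, writer 3 of recipe 2 lands at entry 50, and each writer's recipes occupy a contiguous stride of the table.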
+
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe[idx.ids].ref -= 1;
+
+ if (db->tpe[idx.ids].ref <= 0) {
+ for (uint32_t i = 0; i < 6; ++i) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_PRESET_ALL,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->tpe[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_data));
+ db->tpe[idx.ids].ref = 0;
+ }
+}
+
+/******************************************************************************/
+/* TPE_EXT */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_ext_compare(const struct hw_db_inline_tpe_ext_data *data1,
+ const struct hw_db_inline_tpe_ext_data *data2)
+{
+ return data1->size == data2->size &&
+ memcmp(data1->hdr8, data2->hdr8, HW_DB_INLINE_MAX_ENCAP_SIZE) == 0;
+}
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_ext_idx idx = { .raw = 0 };
+ int rpl_rpl_length = ((int)data->size + 15) / 16;
+ int found = 0, rpl_rpl_index = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE_EXT;
+
+ if (data->size > HW_DB_INLINE_MAX_ENCAP_SIZE) {
+ idx.error = 1;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_tpe_ext; ++i) {
+ int ref = db->tpe_ext[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_ext_compare(data, &db->tpe_ext[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ext_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ rpl_rpl_index = flow_nic_alloc_resource_config(ndev, RES_TPE_RPL, rpl_rpl_length, 1);
+
+ if (rpl_rpl_index < 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe_ext[idx.ids].ref = 1;
+ db->tpe_ext[idx.ids].replace_ram_idx = rpl_rpl_index;
+ memcpy(&db->tpe_ext[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_ext_data));
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_RPL_PTR, idx.ids, rpl_rpl_index);
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_META_RPL_LEN, idx.ids, data->size);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_data[4];
+ memcpy(rpl_data, data->hdr32 + i * 4, sizeof(rpl_data));
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_data);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe_ext[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe_ext[idx.ids].ref -= 1;
+
+ if (db->tpe_ext[idx.ids].ref <= 0) {
+ const int rpl_rpl_length = ((int)db->tpe_ext[idx.ids].data.size + 15) / 16;
+ const int rpl_rpl_index = db->tpe_ext[idx.ids].replace_ram_idx;
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_zero[] = { 0, 0, 0, 0 };
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_zero);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, rpl_rpl_index + i);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ memset(&db->tpe_ext[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_ext_data));
+ db->tpe_ext[idx.ids].ref = 0;
+ }
+}
+
+
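Both the TPE and TPE_EXT databases above follow the inline-DB convention used throughout this file: `*_add()` scans for a live entry with identical data and bumps its refcount (deduplication), otherwise claims the first free slot; `*_deref()` drops a reference and clears the hardware state only when the last user goes away. A condensed, self-contained sketch of that find-or-allocate pattern (`slot_*` names and the `data` payload are illustrative; index 0 is reserved, matching the loops that start at 1):

```c
#include <string.h>

#define NB_SLOTS 8

struct slot { int ref; unsigned data; };
static struct slot slots[NB_SLOTS];

/* Find-or-allocate, as in hw_db_inline_tpe_add(): reuse a live entry with
 * identical data (bump its refcount), else claim the first free slot.
 * Returns the slot index, or -1 when the table is full. */
int slot_add(unsigned data)
{
	int free_idx = -1;

	for (int i = 1; i < NB_SLOTS; ++i) {
		if (slots[i].ref > 0 && slots[i].data == data) {
			slots[i].ref += 1;	/* deduplicated hit */
			return i;
		}

		if (free_idx < 0 && slots[i].ref <= 0)
			free_idx = i;
	}

	if (free_idx < 0)
		return -1;	/* the driver sets idx.error = 1 here */

	slots[free_idx].ref = 1;
	slots[free_idx].data = data;
	return free_idx;
}

/* Drop one reference; clear the slot when the last user goes away,
 * which is where the driver also resets the hardware recipes. */
void slot_deref(int idx)
{
	if (idx > 0 && --slots[idx].ref <= 0)
		memset(&slots[idx], 0, sizeof(slots[idx]));
}
```

Deduplication matters here because many flows share identical encap/modify recipes, so the small recipe tables stretch across far more flows than they have entries.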
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index c97bdef1b7..18d959307e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -52,6 +52,60 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_inline_tpe_data {
+ uint32_t insert_len : 16;
+ uint32_t new_outer : 1;
+ uint32_t calc_eth_type_from_inner_ip : 1;
+ uint32_t ttl_en : 1;
+ uint32_t ttl_dyn : 5;
+ uint32_t ttl_ofs : 8;
+
+ struct {
+ uint32_t en : 1;
+ uint32_t reader_select : 3;
+ uint32_t dyn : 5;
+ uint32_t ofs : 14;
+ uint32_t len : 5;
+ uint32_t padding : 4;
+ } writer[6];
+
+ uint32_t len_a_en : 1;
+ uint32_t len_a_pos_dyn : 5;
+ uint32_t len_a_pos_ofs : 8;
+ uint32_t len_a_add_dyn : 5;
+ uint32_t len_a_add_ofs : 8;
+ uint32_t len_a_sub_dyn : 5;
+
+ uint32_t len_b_en : 1;
+ uint32_t len_b_pos_dyn : 5;
+ uint32_t len_b_pos_ofs : 8;
+ uint32_t len_b_add_dyn : 5;
+ uint32_t len_b_add_ofs : 8;
+ uint32_t len_b_sub_dyn : 5;
+
+ uint32_t len_c_en : 1;
+ uint32_t len_c_pos_dyn : 5;
+ uint32_t len_c_pos_ofs : 8;
+ uint32_t len_c_add_dyn : 5;
+ uint32_t len_c_add_ofs : 8;
+ uint32_t len_c_sub_dyn : 5;
+};
+
+struct hw_db_inline_tpe_ext_data {
+ uint32_t size;
+ union {
+ uint8_t hdr8[HW_DB_INLINE_MAX_ENCAP_SIZE];
+ uint32_t hdr32[(HW_DB_INLINE_MAX_ENCAP_SIZE + 3) / 4];
+ };
+};
+
+struct hw_db_tpe_idx {
+ HW_DB_IDX;
+};
+struct hw_db_tpe_ext_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -70,6 +124,9 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_TPE,
+ HW_DB_IDX_TYPE_TPE_EXT,
+
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
@@ -138,6 +195,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
};
@@ -181,6 +239,18 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data);
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data);
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+
struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_hsh_data *data);
void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index ebdf68385e..35ecea28b6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -18,6 +18,8 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
@@ -2420,6 +2422,92 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
}
}
+ /* Setup TPE EXT */
+ if (fd->tun_hdr.len > 0) {
+ assert(fd->tun_hdr.len <= HW_DB_INLINE_MAX_ENCAP_SIZE);
+
+ struct hw_db_inline_tpe_ext_data tpe_ext_data = {
+ .size = fd->tun_hdr.len,
+ };
+
+ memset(tpe_ext_data.hdr8, 0x0, HW_DB_INLINE_MAX_ENCAP_SIZE);
+ memcpy(tpe_ext_data.hdr8, fd->tun_hdr.d.hdr8, (fd->tun_hdr.len + 15) & ~15);
+
+ struct hw_db_tpe_ext_idx tpe_ext_idx =
+ hw_db_inline_tpe_ext_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_ext_data);
+ local_idxs[(*local_idx_counter)++] = tpe_ext_idx.raw;
+
+ if (tpe_ext_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE EXT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_rpl_ext_ptr)
+ *flm_rpl_ext_ptr = tpe_ext_idx.ids;
+ }
+
+ /* Setup TPE */
+ assert(fd->modify_field_count <= 6);
+
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip =
+ !fd->tun_hdr.new_outer && fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ tpe_data.writer[i].en = 1;
+ tpe_data.writer[i].reader_select = fd->modify_field[i].select;
+ tpe_data.writer[i].dyn = fd->modify_field[i].dyn;
+ tpe_data.writer[i].ofs = fd->modify_field[i].ofs;
+ tpe_data.writer[i].len = fd->modify_field[i].len;
+ }
+
+ if (fd->tun_hdr.new_outer) {
+ const int fcs_length = 4;
+
+ /* L4 length */
+ tpe_data.len_a_en = 1;
+ tpe_data.len_a_pos_dyn = DYN_L4;
+ tpe_data.len_a_pos_ofs = 4;
+ tpe_data.len_a_add_dyn = 18;
+ tpe_data.len_a_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_a_sub_dyn = DYN_L4;
+
+ /* L3 length */
+ tpe_data.len_b_en = 1;
+ tpe_data.len_b_pos_dyn = DYN_L3;
+ tpe_data.len_b_pos_ofs = fd->tun_hdr.ip_version == 4 ? 2 : 4;
+ tpe_data.len_b_add_dyn = 18;
+ tpe_data.len_b_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_b_sub_dyn = DYN_L3;
+
+ /* GTP length */
+ tpe_data.len_c_en = 1;
+ tpe_data.len_c_pos_dyn = DYN_L4_PAYLOAD;
+ tpe_data.len_c_pos_ofs = 2;
+ tpe_data.len_c_add_dyn = 18;
+ tpe_data.len_c_add_ofs = (uint32_t)(-8 - fcs_length) & 0xff;
+ tpe_data.len_c_sub_dyn = DYN_L4_PAYLOAD;
+ }
+
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle, &tpe_data);
+
+ local_idxs[(*local_idx_counter)++] = tpe_idx.raw;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
return 0;
}
@@ -2540,6 +2628,30 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup TPE */
+ if (fd->ttl_sub_enable) {
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip = !fd->tun_hdr.new_outer &&
+ fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_data);
+ fh->db_idxs[fh->db_idx_counter++] = tpe_idx.raw;
+ action_set_data.tpe = tpe_idx;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
}
/* Setup CAT */
@@ -2848,6 +2960,16 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (!ndev->flow_mgnt_prepared) {
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* KM Flow Type 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_mark_resource_used(ndev, RES_KM_CATEGORY, 0);
+
+ /* Reserved FLM Flow Types */
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_MISS_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_UNHANDLED_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_RCP, 0);
/* COT is locked to CFN. Don't set color for CFN 0 */
hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
@@ -2873,8 +2995,11 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
- /* SLC LR index 0 is reserved */
+ /* SLC LR & TPE index 0 are reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_EXT, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RPL, 0);
/* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 33/73] net/ntnic: add FLM module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (31 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 32/73] net/ntnic: add TPE module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 34/73] net/ntnic: add flm rcp module Serhii Iliushyk
` (39 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup
and programming engine which supports exact-match lookup
at line rate for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 42 +++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 190 +++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 257 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 33 +++
.../profile_inline/flow_api_profile_inline.c | 224 ++++++++++++++-
.../flow_api_profile_inline_config.h | 58 ++++
drivers/net/ntnic/ntutil/nt_util.h | 8 +
8 files changed, 1042 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index e16dcd478f..de662c4ed1 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -367,6 +367,18 @@ int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
/* KCE/KCS/FTE KM */
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -374,6 +386,18 @@ int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
enum km_flm_if_select_e if_num, int index, uint32_t *value);
/* KCE/KCS/FTE FLM */
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -384,10 +408,14 @@ int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
@@ -638,7 +666,21 @@ int hw_mod_flm_reset(struct flow_api_backend_s *be);
int hw_mod_flm_control_flush(struct flow_api_backend_s *be);
int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+int hw_mod_flm_status_update(struct flow_api_backend_s *be);
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index 9164ec1ae0..985c821312 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -902,6 +902,95 @@ static int hw_mod_cat_kce_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kce_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kce_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v18.kce[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v21.kce[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* KCS
*/
@@ -925,6 +1014,95 @@ static int hw_mod_cat_kcs_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kcs_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kcs_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v18.kcs[index].category, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v21.kcs[index].category[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* FTE
*/
@@ -1094,6 +1272,12 @@ int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cte_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -1154,6 +1338,12 @@ int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cts_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 8c1f3f2d96..f5eaea7c4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -293,11 +293,268 @@ int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, u
return hw_mod_flm_control_mod(be, field, &value, 0);
}
+int hw_mod_flm_status_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_status_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_status_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STATUS_CALIB_SUCCESS:
+ GET_SET(be->flm.v25.status->calib_success, value);
+ break;
+
+ case HW_FLM_STATUS_CALIB_FAIL:
+ GET_SET(be->flm.v25.status->calib_fail, value);
+ break;
+
+ case HW_FLM_STATUS_INITDONE:
+ GET_SET(be->flm.v25.status->initdone, value);
+ break;
+
+ case HW_FLM_STATUS_IDLE:
+ GET_SET(be->flm.v25.status->idle, value);
+ break;
+
+ case HW_FLM_STATUS_CRITICAL:
+ GET_SET(be->flm.v25.status->critical, value);
+ break;
+
+ case HW_FLM_STATUS_PANIC:
+ GET_SET(be->flm.v25.status->panic, value);
+ break;
+
+ case HW_FLM_STATUS_CRCERR:
+ GET_SET(be->flm.v25.status->crcerr, value);
+ break;
+
+ case HW_FLM_STATUS_EFT_BP:
+ GET_SET(be->flm.v25.status->eft_bp, value);
+ break;
+
+ case HW_FLM_STATUS_CACHE_BUFFER_CRITICAL:
+ GET_SET(be->flm.v25.status->cache_buf_critical, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_status_mod(be, field, value, 1);
+}
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be)
{
return be->iface->flm_scan_flush(be->be_dev, &be->flm);
}
+static int hw_mod_flm_scan_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCAN_I:
+ GET_SET(be->flm.v25.scan->i, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_scan_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_load_bin_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_load_bin_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_LOAD_BIN:
+ GET_SET(be->flm.v25.load_bin->bin, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_load_bin_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_prio_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_prio_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PRIO_LIMIT0:
+ GET_SET(be->flm.v25.prio->limit0, value);
+ break;
+
+ case HW_FLM_PRIO_FT0:
+ GET_SET(be->flm.v25.prio->ft0, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT1:
+ GET_SET(be->flm.v25.prio->limit1, value);
+ break;
+
+ case HW_FLM_PRIO_FT1:
+ GET_SET(be->flm.v25.prio->ft1, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT2:
+ GET_SET(be->flm.v25.prio->limit2, value);
+ break;
+
+ case HW_FLM_PRIO_FT2:
+ GET_SET(be->flm.v25.prio->ft2, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT3:
+ GET_SET(be->flm.v25.prio->limit3, value);
+ break;
+
+ case HW_FLM_PRIO_FT3:
+ GET_SET(be->flm.v25.prio->ft3, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_prio_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count)
+{
+ if (count == ALL_ENTRIES)
+ count = be->flm.nb_pst_profiles;
+
+ if ((unsigned int)(start_idx + count) > be->flm.nb_pst_profiles) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ return be->iface->flm_pst_flush(be->be_dev, &be->flm, start_idx, count);
+}
+
+static int hw_mod_flm_pst_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.pst[index], (uint8_t)*value,
+ sizeof(struct flm_v25_pst_s));
+ break;
+
+ case HW_FLM_PST_BP:
+ GET_SET(be->flm.v25.pst[index].bp, value);
+ break;
+
+ case HW_FLM_PST_PP:
+ GET_SET(be->flm.v25.pst[index].pp, value);
+ break;
+
+ case HW_FLM_PST_TP:
+ GET_SET(be->flm.v25.pst[index].tp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_pst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index dec96fce85..61492090ce 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,14 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_FT_LOOKUP_KEY_A 0
+
+#define HW_DB_FT_TYPE_KM 1
+#define HW_DB_FT_LOOKUP_KEY_A 0
+#define HW_DB_FT_LOOKUP_KEY_C 2
+
+#define HW_DB_FT_TYPE_FLM 0
+#define HW_DB_FT_TYPE_KM 1
/******************************************************************************/
/* Handle */
/******************************************************************************/
@@ -59,6 +67,23 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_resource_db_flm_ft {
+ struct hw_db_inline_flm_ft_data data;
+ struct hw_db_flm_ft idx;
+ int ref;
+ } *ft;
+
+ struct hw_db_inline_resource_db_flm_match_set {
+ struct hw_db_match_set_idx idx;
+ int ref;
+ } *match_set;
+
+ struct hw_db_inline_resource_db_flm_cfn_map {
+ int cfn_idx;
+ } *cfn_map;
+ } *flm;
+
struct hw_db_inline_resource_db_km_rcp {
struct hw_db_inline_km_rcp_data data;
int ref;
@@ -70,6 +95,7 @@ struct hw_db_inline_resource_db {
} *km;
uint32_t nb_cat;
+ uint32_t nb_flm_ft;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -173,6 +199,13 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
*db_handle = db;
+
+ /* Preset data */
+
+ db->flm[0].ft[1].idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ db->flm[0].ft[1].idx.id1 = 1;
+ db->flm[0].ft[1].ref = 1;
+
return 0;
}
@@ -235,6 +268,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ hw_db_inline_flm_ft_deref(ndev, db_handle,
+ *(struct hw_db_flm_ft *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -286,6 +324,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -307,6 +348,61 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
/* Filter */
/******************************************************************************/
+/*
+ * lookup refers to key A/B/C/D, and can have values 0, 1, 2, and 3.
+ */
+static void hw_db_set_ft(struct flow_nic_dev *ndev, int type, int cfn_index, int lookup,
+ int flow_type, int enable)
+{
+ (void)type;
+ (void)enable;
+
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index = (8 * flow_type + cfn_index / cat_funcs) * max_lookups + lookup;
+ int fte_field = cfn_index % cat_funcs;
+
+ uint32_t current_bm = 0;
+ uint32_t fte_field_bm = 1 << fte_field;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t final_bm = enable ? (fte_field_bm | current_bm) : (~fte_field_bm & current_bm);
+
+ if (current_bm != final_bm) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
/*
* Setup a filter to match:
* All packets in CFN checks
@@ -348,6 +444,17 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
return -1;
+ /* KM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match FT=ft_argument for look-up C */
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft, 1);
+
/* Make all CFN checks TRUE */
if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
return -1;
@@ -1252,6 +1359,133 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+/******************************************************************************/
+/* FLM FT */
+/******************************************************************************/
+
+static int hw_db_inline_flm_ft_compare(const struct hw_db_inline_flm_ft_data *data1,
+ const struct hw_db_inline_flm_ft_data *data2)
+{
+ return data1->is_group_zero == data2->is_group_zero && data1->jump == data2->jump &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ if (data->is_group_zero) {
+ idx.error = 1;
+ return idx;
+ }
+
+ if (flm_rcp->ft[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->group];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ /* RCP 0 always uses FT 1; i.e. use unhandled FT for disabled RCP */
+ if (data->group == 0) {
+ idx.id1 = 1;
+ return idx;
+ }
+
+ if (data->is_group_zero) {
+ idx.id3 = 1;
+ return idx;
+ }
+
+ /* FLM_FT records 0, 1 and last (15) are reserved */
+ /* NOTE: RES_FLM_FLOW_TYPE resource is global and it cannot be used in _add() and _deref()
+ * to track usage of FLM_FT recipes which are group specific.
+ */
+ for (uint32_t i = 2; i < db->nb_flm_ft; ++i) {
+ if (!found && flm_rcp->ft[i].ref <= 0 &&
+ !flow_nic_is_resource_used(ndev, RES_FLM_FLOW_TYPE, i)) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (flm_rcp->ft[i].ref > 0 &&
+ hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error && idx.id3 == 0)
+ db->flm[idx.id2].ft[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+
+ if (idx.error || idx.id2 == 0 || idx.id3 > 0)
+ return;
+
+ flm_rcp = &db->flm[idx.id2];
+
+ flm_rcp->ft[idx.id1].ref -= 1;
+
+ if (flm_rcp->ft[idx.id1].ref > 0)
+ return;
+
+ flm_rcp->ft[idx.id1].ref = 0;
+ memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
+}
/******************************************************************************/
/* HSH */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 18d959307e..a520ae1769 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_match_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_action_set_idx {
HW_DB_IDX;
};
@@ -106,6 +110,13 @@ struct hw_db_tpe_ext_idx {
HW_DB_IDX;
};
+struct hw_db_flm_idx {
+ HW_DB_IDX;
+};
+struct hw_db_flm_ft {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -128,6 +139,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE_EXT,
HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -211,6 +223,17 @@ struct hw_db_inline_km_ft_data {
struct hw_db_action_set_idx action_set;
};
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -277,6 +300,16 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx);
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_ft idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 35ecea28b6..5ad2ceb4ca 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -11,6 +11,7 @@
#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "flow_id_table.h"
#include "stream_binary_flow_api.h"
@@ -47,6 +48,128 @@ static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
return -1;
}
+/*
+ * Flow Matcher functionality
+ */
+
+static int flm_sdram_calibrate(struct flow_nic_dev *ndev)
+{
+ int success = 0;
+ uint32_t fail_value = 0;
+ uint32_t value = 0;
+
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_PRESET_ALL, 0x0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_SPLIT_SDRAM_USAGE, 0x10);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for DDR4 calibration/init to complete */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_SUCCESS, &value);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_FAIL, &fail_value);
+
+ if (value & 0x80000000) {
+ success = 1;
+ break;
+ }
+
+ if (fail_value != 0)
+ break;
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - SDRAM calibration failed");
+ NT_LOG(ERR, FILTER,
+ "Calibration status: success 0x%08" PRIx32 " - fail 0x%08" PRIx32,
+ value, fail_value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
+{
+ int success = 0;
+
+ /*
+ * Make sure no lookup is performed during init, i.e.
+ * disable every category and disable FLM
+ */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for FLM to enter Idle state */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_IDLE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - Never idle");
+ return -1;
+ }
+
+ success = 0;
+
+ /* Start SDRAM initialization */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x1);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_INITDONE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER,
+ "FLM initialization failed - SDRAM initialization incomplete");
+ return -1;
+ }
+
+ /* Set the INIT value back to zero to clear the bit in the SW register cache */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Enable FLM */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, enable);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ int nb_rpp_per_ps = ndev->be.flm.nb_rpp_clock_in_ps;
+ int nb_load_aps_max = ndev->be.flm.nb_load_aps_max;
+ uint32_t scan_i_value = 0;
+
+ if (NTNIC_SCANNER_LOAD > 0) {
+ scan_i_value = (1 / (nb_rpp_per_ps * 0.000000000001)) /
+ (nb_load_aps_max * NTNIC_SCANNER_LOAD);
+ }
+
+ hw_mod_flm_scan_set(&ndev->be, HW_FLM_SCAN_I, scan_i_value);
+ hw_mod_flm_scan_flush(&ndev->be);
+
+ return 0;
+}
+
+
struct flm_flow_key_def_s {
union {
struct {
@@ -2355,11 +2478,11 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data,
- uint32_t group __rte_unused,
+ uint32_t group,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
- uint16_t *flm_rpl_ext_ptr __rte_unused,
- uint32_t *flm_ft __rte_unused,
+ uint16_t *flm_rpl_ext_ptr,
+ uint32_t *flm_ft,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
@@ -2508,6 +2631,25 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 0,
+ .group = group,
+ };
+ struct hw_db_flm_ft flm_ft_idx = empty_pattern
+ ? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
+ : hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ local_idxs[(*local_idx_counter)++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_ft)
+ *flm_ft = flm_ft_idx.id1;
+
return 0;
}
@@ -2515,7 +2657,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
- uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t num_dest_port, uint32_t num_queues,
uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
@@ -2809,6 +2951,21 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 1,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ };
+ struct hw_db_flm_ft flm_ft_idx =
+ hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -3029,6 +3186,63 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
NT_VIOLATING_MBR_QSL) < 0)
goto err_exit0;
+ /* FLM */
+ if (flm_sdram_calibrate(ndev) < 0)
+ goto err_exit0;
+
+ if (flm_sdram_reset(ndev, 1) < 0)
+ goto err_exit0;
+
+ /* Learn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LDS, 0);
+ /* Learn fail status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LFS, 1);
+ /* Learn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LIS, 1);
+ /* Unlearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UDS, 0);
+ /* Unlearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UIS, 0);
+ /* Relearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RDS, 0);
+ /* Relearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RIS, 0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RBL, 4);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Set the sliding window size for FLM load */
+ uint32_t bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) /
+ (32ULL * ndev->be.flm.nb_rpp_clock_in_ps)) -
+ 1ULL);
+ hw_mod_flm_load_bin_set(&ndev->be, HW_FLM_LOAD_BIN, bin);
+ hw_mod_flm_load_bin_flush(&ndev->be);
+
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT0,
+ 0); /* Drop at 100% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT0, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT1,
+ 14); /* Drop at 87.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT1, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT2,
+ 10); /* Drop at 62.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT2, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT3,
+ 6); /* Drop at 37.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT3, 1);
+ hw_mod_flm_prio_flush(&ndev->be);
+
+ /* TODO How to set and use these limits */
+ for (uint32_t i = 0; i < ndev->be.flm.nb_pst_profiles; ++i) {
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_BP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_PP, i,
+ NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_TP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT);
+ }
+
+ hw_mod_flm_pst_flush(&ndev->be, 0, ALL_ENTRIES);
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -3057,6 +3271,8 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
#endif
if (ndev->flow_mgnt_prepared) {
+ flm_sdram_reset(ndev, 0);
+
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
new file mode 100644
index 0000000000..8ba8b8f67a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -0,0 +1,58 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_CONFIG_H_
+#define _FLOW_API_PROFILE_INLINE_CONFIG_H_
+
+/*
+ * Statistics are generated each time the byte counter crosses a limit.
+ * If BYTE_LIMIT is zero then the byte counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_LIMIT + 15) bytes
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(8 + 15) = 2^23 ~~ 8MB
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT 8
+
+/*
+ * Statistics are generated each time the packet counter crosses a limit.
+ * If PKT_LIMIT is zero then the packet counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(PKT_LIMIT + 11) pkts
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(5 + 11) = 2^16 pkts ~~ 64K pkts
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT 5
+
+/*
+ * Statistics are generated each time flow time (measured in ns) crosses a
+ * limit.
+ * If BYTE_TIMEOUT is zero then the flow time does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_TIMEOUT + 15) ns
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(23 + 15) = 2^38 ns ~~ 275 sec
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT 23
+
+/*
+ * This define sets the percentage of the full processing capacity
+ * being reserved for scan operations. The scanner is responsible
+ * for detecting aged out flows and meters with statistics timeout.
+ *
+ * A high scanner load percentage makes this detection more precise
+ * but also reduces packet processing capacity.
+ *
+ * The percentage is given as a decimal number, e.g. 0.01 for 1%, which is the recommended value.
+ */
+#define NTNIC_SCANNER_LOAD 0.01
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
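The three limits in the header above are exponents with a fixed bias, not raw counts. A host-side sketch of the decoded values (helper names are illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Decode the periodic statistics limits documented in
 * flow_api_profile_inline_config.h. A zero exponent disables the trigger.
 */
static uint64_t stats_byte_limit(unsigned int e)
{
	return e ? 1ULL << (e + 15) : 0;	/* 2^(BYTE_LIMIT + 15) bytes */
}

static uint64_t stats_pkt_limit(unsigned int e)
{
	return e ? 1ULL << (e + 11) : 0;	/* 2^(PKT_LIMIT + 11) packets */
}

static uint64_t stats_timeout_ns(unsigned int e)
{
	return e ? 1ULL << (e + 15) : 0;	/* 2^(BYTE_TIMEOUT + 15) ns */
}
```

With the defaults above, statistics records are emitted roughly every 8 MiB (`8` → 2^23 bytes), every 64K packets (`5` → 2^16), or every ~275 seconds (`23` → 2^38 ns), whichever counter crosses its limit first.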
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 71ecd6c68c..a482fb43ad 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -16,6 +16,14 @@
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
+/*
+ * Window size in seconds for measuring FLM load
+ * and port load.
+ * The window size must be at most 3 minutes in
+ * order to prevent overflow.
+ */
+#define FLM_LOAD_WINDOWS_SIZE 2ULL
+
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
#define PCIIDENT_TO_BUSNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 8) & 0xFFU))
#define PCIIDENT_TO_DEVNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 3) & 0x1FU))
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
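For reference, the scan-interval and load-bin arithmetic introduced by this patch can be reproduced in isolation. The clock values used below (a 1000 ps register-pipeline clock period and 100 maximum accesses per scan) are illustrative assumptions, not taken from any concrete adapter:

```c
#include <assert.h>
#include <stdint.h>

#define NTNIC_SCANNER_LOAD 0.01		/* 1%, as in the config header */
#define FLM_LOAD_WINDOWS_SIZE 2ULL	/* seconds, as in nt_util.h */

/*
 * Scan interval, as computed in flm_sdram_reset(): clock frequency in Hz
 * (the reciprocal of the period given in picoseconds) divided by the
 * product of max accesses per scan and the reserved load fraction.
 */
static uint32_t scan_interval(int rpp_clock_ps, int load_aps_max)
{
	if (NTNIC_SCANNER_LOAD <= 0)
		return 0;

	return (1 / (rpp_clock_ps * 0.000000000001)) /
		(load_aps_max * NTNIC_SCANNER_LOAD);
}

/*
 * Sliding-window bin count for load measurement, as computed in
 * initialize_flow_management_of_ndev_profile_inline(): the window
 * length in picoseconds divided by 32 clock periods, minus one.
 */
static uint32_t load_bin(int rpp_clock_ps)
{
	return (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) /
			   (32ULL * (uint64_t)rpp_clock_ps)) - 1ULL);
}
```

The bin computation is exact integer arithmetic; the scan interval goes through floating point, so only its approximate magnitude should be relied on.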
* [PATCH v3 34/73] net/ntnic: add flm rcp module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (32 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 33/73] net/ntnic: add FLM module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 35/73] net/ntnic: add learn flow queue handling Serhii Iliushyk
` (38 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup
and programming engine which supports exact-match lookup
at line rate for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 133 ++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 +++++++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 20 ++
.../profile_inline/flow_api_profile_inline.c | 42 +++-
5 files changed, 390 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index de662c4ed1..13722c30a9 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -683,6 +683,10 @@ int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value);
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f5eaea7c4e..0a7e90c04f 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -579,3 +579,136 @@ int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int cou
}
return be->iface->flm_scrub_flush(be->be_dev, &be->flm, start_idx, count);
}
+
+static int hw_mod_flm_rcp_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.rcp[index], (uint8_t)*value,
+ sizeof(struct flm_v25_rcp_s));
+ break;
+
+ case HW_FLM_RCP_LOOKUP:
+ GET_SET(be->flm.v25.rcp[index].lookup, value);
+ break;
+
+ case HW_FLM_RCP_QW0_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW0_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_FLM_RCP_QW0_SEL:
+ GET_SET(be->flm.v25.rcp[index].qw0_sel, value);
+ break;
+
+ case HW_FLM_RCP_QW4_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW4_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw8_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW8_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw8_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_SEL:
+ GET_SET(be->flm.v25.rcp[index].sw8_sel, value);
+ break;
+
+ case HW_FLM_RCP_SW9_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw9_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW9_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw9_ofs, value);
+ break;
+
+ case HW_FLM_RCP_MASK:
+ if (get) {
+ memcpy(value, be->flm.v25.rcp[index].mask,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+
+ } else {
+ memcpy(be->flm.v25.rcp[index].mask, value,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+ }
+
+ break;
+
+ case HW_FLM_RCP_KID:
+ GET_SET(be->flm.v25.rcp[index].kid, value);
+ break;
+
+ case HW_FLM_RCP_OPN:
+ GET_SET(be->flm.v25.rcp[index].opn, value);
+ break;
+
+ case HW_FLM_RCP_IPN:
+ GET_SET(be->flm.v25.rcp[index].ipn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_DYN:
+ GET_SET(be->flm.v25.rcp[index].byt_dyn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_OFS:
+ GET_SET(be->flm.v25.rcp[index].byt_ofs, value);
+ break;
+
+ case HW_FLM_RCP_TXPLM:
+ GET_SET(be->flm.v25.rcp[index].txplm, value);
+ break;
+
+ case HW_FLM_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->flm.v25.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value)
+{
+ if (field != HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, value, 0);
+}
+
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ if (field == HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 61492090ce..0ae058b91e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -68,6 +68,9 @@ struct hw_db_inline_resource_db {
} *cat;
struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_flm_rcp_data data;
+ int ref;
+
struct hw_db_inline_resource_db_flm_ft {
struct hw_db_inline_flm_ft_data data;
struct hw_db_flm_ft idx;
@@ -96,6 +99,7 @@ struct hw_db_inline_resource_db {
uint32_t nb_cat;
uint32_t nb_flm_ft;
+ uint32_t nb_flm_rcp;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -164,6 +168,42 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+
+ db->nb_flm_ft = ndev->be.cat.nb_flow_types;
+ db->nb_flm_rcp = ndev->be.flm.nb_categories;
+ db->flm = calloc(db->nb_flm_rcp, sizeof(struct hw_db_inline_resource_db_flm_rcp));
+
+ if (db->flm == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ db->flm[i].ft =
+ calloc(db->nb_flm_ft, sizeof(struct hw_db_inline_resource_db_flm_ft));
+
+ if (db->flm[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].match_set =
+ calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_flm_match_set));
+
+ if (db->flm[i].match_set == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].cfn_map = calloc(db->nb_cat * db->nb_flm_ft,
+ sizeof(struct hw_db_inline_resource_db_flm_cfn_map));
+
+ if (db->flm[i].cfn_map == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
db->nb_km_ft = ndev->be.cat.nb_flow_types;
db->nb_km_rcp = ndev->be.km.nb_categories;
db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
@@ -222,6 +262,16 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cat);
+ if (db->flm) {
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ free(db->flm[i].ft);
+ free(db->flm[i].match_set);
+ free(db->flm[i].cfn_map);
+ }
+
+ free(db->flm);
+ }
+
if (db->km) {
for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
free(db->km[i].ft);
@@ -268,6 +318,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ hw_db_inline_flm_deref(ndev, db_handle, *(struct hw_db_flm_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_FLM_FT:
hw_db_inline_flm_ft_deref(ndev, db_handle,
*(struct hw_db_flm_ft *)&idxs[i]);
@@ -324,6 +378,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ return &db->flm[idxs[i].id1].data;
+
case HW_DB_IDX_TYPE_FLM_FT:
return NULL; /* FTs can't be easily looked up */
@@ -481,6 +538,20 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
return 0;
}
+static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int flm_rcp)
+{
+ uint32_t flm_mask[10];
+ memset(flm_mask, 0xff, sizeof(flm_mask));
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, flm_rcp, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, flm_rcp, 1);
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, flm_rcp, flm_mask);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, flm_rcp, flm_rcp + 2);
+
+ hw_mod_flm_rcp_flush(&ndev->be, flm_rcp, 1);
+}
+
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1268,10 +1339,17 @@ void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_d
void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
{
(void)ndev;
- (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
if (idx.error)
return;
+
+ db->flm[idx.id1].ref -= 1;
+
+ if (db->flm[idx.id1].ref <= 0) {
+ memset(&db->flm[idx.id1].data, 0x0, sizeof(struct hw_db_inline_km_rcp_data));
+ db->flm[idx.id1].ref = 0;
+ }
}
/******************************************************************************/
@@ -1359,6 +1437,121 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_flm_compare(const struct hw_db_inline_flm_rcp_data *data1,
+ const struct hw_db_inline_flm_rcp_data *data2)
+{
+ if (data1->qw0_dyn != data2->qw0_dyn || data1->qw0_ofs != data2->qw0_ofs ||
+ data1->qw4_dyn != data2->qw4_dyn || data1->qw4_ofs != data2->qw4_ofs ||
+ data1->sw8_dyn != data2->sw8_dyn || data1->sw8_ofs != data2->sw8_ofs ||
+ data1->sw9_dyn != data2->sw9_dyn || data1->sw9_ofs != data2->sw9_ofs ||
+ data1->outer_prot != data2->outer_prot || data1->inner_prot != data2->inner_prot) {
+ return 0;
+ }
+
+ for (int i = 0; i < 10; ++i)
+ if (data1->mask[i] != data2->mask[i])
+ return 0;
+
+ return 1;
+}
+
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_idx idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_RCP;
+ idx.id1 = group;
+
+ if (group == 0)
+ return idx;
+
+ if (db->flm[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_compare(data, &db->flm[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ref(ndev, db, idx);
+ return idx;
+ }
+
+ db->flm[idx.id1].ref = 1;
+ memcpy(&db->flm[idx.id1].data, data, sizeof(struct hw_db_inline_flm_rcp_data));
+
+ {
+ uint32_t flm_mask[10] = {
+ data->mask[0], /* SW9 */
+ data->mask[1], /* SW8 */
+ data->mask[5], data->mask[4], data->mask[3], data->mask[2], /* QW4 */
+ data->mask[9], data->mask[8], data->mask[7], data->mask[6], /* QW0 */
+ };
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, idx.id1, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, idx.id1, 1);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_DYN, idx.id1, data->qw0_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_OFS, idx.id1, data->qw0_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_DYN, idx.id1, data->qw4_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_OFS, idx.id1, data->qw4_ofs);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_DYN, idx.id1, data->sw8_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_OFS, idx.id1, data->sw8_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_DYN, idx.id1, data->sw9_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_OFS, idx.id1, data->sw9_ofs);
+
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, idx.id1, flm_mask);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, idx.id1, idx.id1 + 2);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_OPN, idx.id1, data->outer_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_IPN, idx.id1, data->inner_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_DYN, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_OFS, idx.id1, -20);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_TXPLM, idx.id1, UINT32_MAX);
+
+ hw_mod_flm_rcp_flush(&ndev->be, idx.id1, 1);
+ }
+
+ return idx;
+}
+
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->flm[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ if (idx.id1 > 0) {
+ db->flm[idx.id1].ref -= 1;
+
+ if (db->flm[idx.id1].ref <= 0) {
+ memset(&db->flm[idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_flm_rcp_data));
+ db->flm[idx.id1].ref = 0;
+
+ hw_db_inline_setup_default_flm_rcp(ndev, idx.id1);
+ }
+ }
+}
+
/******************************************************************************/
/* FLM FT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a520ae1769..9820225ffa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -138,6 +138,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE,
HW_DB_IDX_TYPE_TPE_EXT,
+ HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
@@ -165,6 +166,22 @@ struct hw_db_inline_cat_data {
uint8_t ip_prot_tunnel;
};
+struct hw_db_inline_flm_rcp_data {
+ uint64_t qw0_dyn : 5;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 5;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 5;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 5;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_prot : 1;
+ uint64_t inner_prot : 1;
+ uint64_t padding : 10;
+
+ uint32_t mask[10];
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -300,7 +317,10 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group);
void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_flm_ft_data *data);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 5ad2ceb4ca..719f5fcdec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -101,6 +101,11 @@ static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
hw_mod_flm_control_flush(&ndev->be);
+ for (uint32_t i = 1; i < ndev->be.flm.nb_categories; ++i)
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, i, 0x0);
+
+ hw_mod_flm_rcp_flush(&ndev->be, 1, ndev->be.flm.nb_categories - 1);
+
/* Wait for FLM to enter Idle state */
for (uint32_t i = 0; i < 1000000; ++i) {
uint32_t value = 0;
@@ -2658,8 +2663,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port, uint32_t num_queues,
- uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
- struct flm_flow_key_def_s *key_def __rte_unused)
+ uint32_t *packet_data, uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
@@ -2692,6 +2697,31 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
* Flow for group 1..32
*/
+ /* Setup FLM RCP */
+ struct hw_db_inline_flm_rcp_data flm_data = {
+ .qw0_dyn = key_def->qw0_dyn,
+ .qw0_ofs = key_def->qw0_ofs,
+ .qw4_dyn = key_def->qw4_dyn,
+ .qw4_ofs = key_def->qw4_ofs,
+ .sw8_dyn = key_def->sw8_dyn,
+ .sw8_ofs = key_def->sw8_ofs,
+ .sw9_dyn = key_def->sw9_dyn,
+ .sw9_ofs = key_def->sw9_ofs,
+ .outer_prot = key_def->outer_proto,
+ .inner_prot = key_def->inner_proto,
+ };
+ memcpy(flm_data.mask, packet_mask, sizeof(uint32_t) * 10);
+ struct hw_db_flm_idx flm_idx =
+ hw_db_inline_flm_add(dev->ndev, dev->ndev->hw_db_handle, &flm_data,
+ attr->group);
+ fh->db_idxs[fh->db_idx_counter++] = flm_idx.raw;
+
+ if (flm_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup Actions */
uint16_t flm_rpl_ext_ptr = 0;
uint32_t flm_ft = 0;
@@ -2704,7 +2734,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
/* Program flow */
- convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ convert_fh_to_fh_flm(fh, packet_data, flm_idx.id1 + 2, flm_ft, flm_rpl_ext_ptr,
flm_scrub, attr->priority & 0x3);
flm_flow_programming(fh, NT_FLM_OP_LEARN);
@@ -3276,6 +3306,12 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, 0, 0);
+ hw_mod_flm_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
+ flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 35/73] net/ntnic: add learn flow queue handling
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (33 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 34/73] net/ntnic: add flm rcp module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 36/73] net/ntnic: match and action db attributes were added Serhii Iliushyk
` (37 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement a thread that handles the flow learn queue.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 5 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 33 +++++++
.../flow_api/profile_inline/flm_lrn_queue.c | 42 +++++++++
.../flow_api/profile_inline/flm_lrn_queue.h | 11 +++
.../profile_inline/flow_api_profile_inline.c | 48 ++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 94 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 241 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 13722c30a9..17d5755634 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,11 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt);
+
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
struct hsh_func_s {
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8017aa4fc3..8ebdd98db0 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -14,6 +14,7 @@ typedef struct ntdrv_4ga_s {
char *p_drv_name;
volatile bool b_shutdown;
+ rte_thread_t flm_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 0a7e90c04f..f4c29b8bde 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,3 +712,36 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ int ret = 0;
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_LRN_DATA:
+ ret = be->iface->flm_lrn_data_flush(be->be_dev, &be->flm, value, records,
+ handled_records,
+ (sizeof(struct flm_v25_lrn_data_s) /
+ sizeof(uint32_t)),
+ inf_word_cnt, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
index ad7efafe08..6e77c28f93 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -13,8 +13,28 @@
#include "flm_lrn_queue.h"
+#define QUEUE_SIZE (1 << 13)
+
#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+void *flm_lrn_queue_create(void)
+{
+ static_assert((ELEM_SIZE & ~(size_t)3) == ELEM_SIZE, "FLM LEARN struct size");
+ struct rte_ring *q = rte_ring_create_elem("RFQ",
+ ELEM_SIZE,
+ QUEUE_SIZE,
+ SOCKET_ID_ANY,
+ RING_F_MP_HTS_ENQ | RING_F_SC_DEQ);
+ assert(q != NULL);
+ return q;
+}
+
+void flm_lrn_queue_free(void *q)
+{
+ if (q)
+ rte_ring_free(q);
+}
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q)
{
struct rte_ring_zc_data zcd;
@@ -26,3 +46,25 @@ void flm_lrn_queue_release_write_buffer(void *q)
{
rte_ring_enqueue_zc_elem_finish(q, 1);
}
+
+read_record flm_lrn_queue_get_read_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ read_record rr;
+
+ if (rte_ring_dequeue_zc_burst_elem_start(q, ELEM_SIZE, QUEUE_SIZE, &zcd, NULL) != 0) {
+ rr.num = zcd.n1;
+ rr.p = zcd.ptr1;
+
+ } else {
+ rr.num = 0;
+ rr.p = NULL;
+ }
+
+ return rr;
+}
+
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num)
+{
+ rte_ring_dequeue_zc_elem_finish(q, num);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
index 8cee0c8e78..40558f4201 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -8,7 +8,18 @@
#include <stdint.h>
+typedef struct read_record {
+ uint32_t *p;
+ uint32_t num;
+} read_record;
+
+void *flm_lrn_queue_create(void);
+void flm_lrn_queue_free(void *q);
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q);
void flm_lrn_queue_release_write_buffer(void *q);
+read_record flm_lrn_queue_get_read_buffer(void *q);
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num);
+
#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 719f5fcdec..0b8ac26b83 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -39,6 +39,48 @@
static void *flm_lrn_queue_arr;
+static void flm_setup_queues(void)
+{
+ flm_lrn_queue_arr = flm_lrn_queue_create();
+ assert(flm_lrn_queue_arr != NULL);
+}
+
+static void flm_free_queues(void)
+{
+ flm_lrn_queue_free(flm_lrn_queue_arr);
+}
+
+static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ read_record r = flm_lrn_queue_get_read_buffer(flm_lrn_queue_arr);
+
+ if (r.num) {
+ uint32_t handled_records = 0;
+
+ if (hw_mod_flm_lrn_data_set_flush(&dev->ndev->be, HW_FLM_FLOW_LRN_DATA, r.p, r.num,
+ &handled_records, inf_word_cnt, sta_word_cnt)) {
+ NT_LOG(ERR, FILTER, "Flow programming failed");
+
+ } else if (handled_records > 0) {
+ flm_lrn_queue_release_read_buffer(flm_lrn_queue_arr, handled_records);
+ }
+ }
+
+ return r.num;
+}
+
+static uint32_t flm_update(struct flow_eth_dev *dev)
+{
+ static uint32_t inf_word_cnt;
+ static uint32_t sta_word_cnt;
+
+ if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
+ return 1;
+
+ return inf_word_cnt + sta_word_cnt;
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -4219,6 +4261,12 @@ static const struct profile_inline_ops ops = {
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * NT Flow FLM Meter API
+ */
+ .flm_setup_queues = flm_setup_queues,
+ .flm_free_queues = flm_free_queues,
+ .flm_update = flm_update,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index a509a8eb51..bfca8f28b1 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -24,6 +24,11 @@
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
+#define THREAD_JOIN(a) rte_thread_join(a, NULL)
+#define THREAD_FUNC static uint32_t
+#define THREAD_RETURN (0)
#define HW_MAX_PKT_LEN (10000)
#define MAX_MTU (HW_MAX_PKT_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)
@@ -120,6 +125,16 @@ store_pdrv(struct drv_s *p_drv)
rte_spinlock_unlock(&hwlock);
}
+static void clear_pdrv(struct drv_s *p_drv)
+{
+ if (p_drv->adapter_no > NUM_ADAPTER_MAX)
+ return;
+
+ rte_spinlock_lock(&hwlock);
+ _g_p_drv[p_drv->adapter_no] = NULL;
+ rte_spinlock_unlock(&hwlock);
+}
+
static struct drv_s *
get_pdrv_from_pci(struct rte_pci_addr addr)
{
@@ -1240,6 +1255,13 @@ eth_dev_set_link_down(struct rte_eth_dev *eth_dev)
static void
drv_deinit(struct drv_s *p_drv)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ return;
+ }
+
const struct adapter_ops *adapter_ops = get_adapter_ops();
if (adapter_ops == NULL) {
@@ -1251,6 +1273,22 @@ drv_deinit(struct drv_s *p_drv)
return;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ fpga_info_t *fpga_info = &p_nt_drv->adapter_info.fpga_info;
+
+ /*
+ * Mark the global pdrv as cleared; some threads use this to detect shutdown.
+ * Wait 1 second to give those threads a chance to see the termination.
+ */
+ clear_pdrv(p_drv);
+ nt_os_wait_usec(1000000);
+
+ /* stop statistics threads */
+ p_drv->ntdrv.b_shutdown = true;
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ THREAD_JOIN(p_nt_drv->flm_thread);
+ profile_inline_ops->flm_free_queues();
+ }
/* stop adapter */
adapter_ops->deinit(&p_nt_drv->adapter_info);
@@ -1359,6 +1397,43 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.promiscuous_enable = promiscuous_enable,
};
+/*
+ * Adapter flm stat thread
+ */
+THREAD_FUNC adapter_flm_update_thread_fn(void *context)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: profile_inline module uninitialized", __func__);
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct nt4ga_filter_s *p_nt4ga_filter = &p_adapter_info->nt4ga_filter;
+ struct flow_nic_dev *p_flow_nic_dev = p_nt4ga_filter->mp_flow_device;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: waiting for port configuration",
+ p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (p_flow_nic_dev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ struct flow_eth_dev *dev = p_flow_nic_dev->eth_base;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: begin", p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (!p_drv->ntdrv.b_shutdown)
+ if (profile_inline_ops->flm_update(dev) == 0)
+ nt_os_wait_usec(10);
+
+ NT_LOG(DBG, NTNIC, "%s: %s: end", p_adapter_info->mp_adapter_id_str, __func__);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1369,6 +1444,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* Return statement is not necessary here to allow traffic processing by SW */
}
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1597,6 +1679,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (profile_inline_ops != NULL && fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ profile_inline_ops->flm_setup_queues();
+ res = THREAD_CTRL_CREATE(&p_nt_drv->flm_thread, "ntnic-nt_flm_update_thr",
+ adapter_flm_update_thread_fn, (void *)p_drv);
+
+ if (res) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1069be2f85..27d6cbef01 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -256,6 +256,13 @@ struct profile_inline_ops {
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+
+ /*
+ * NT Flow FLM queue API
+ */
+ void (*flm_setup_queues)(void);
+ void (*flm_free_queues)(void);
+ uint32_t (*flm_update)(struct flow_eth_dev *dev);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v3 36/73] net/ntnic: match and action db attributes were added
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (34 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 35/73] net/ntnic: add learn flow queue handling Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 37/73] net/ntnic: add flow dump feature Serhii Iliushyk
` (36 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement referencing and dereferencing of match and action set database entries.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
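The match_set and action_set tables this patch adds are fixed-size arrays of {data, ref} entries: adding a set either bumps the reference count of an identical existing entry or claims a free slot, and dereferencing frees the slot once the count drops to zero. A minimal sketch of that pattern (plain C; the struct, size, and function names here are illustrative, not the driver's):

```c
#include <string.h>

#define SET_NB 8                 /* table size (example only) */

struct set_data { int key; };    /* stand-in for the real set data */

struct set_entry {
	struct set_data data;
	int ref;                 /* 0 means the slot is free */
};

static struct set_entry sets[SET_NB];

/* Reuse a matching live entry (bumping its refcount) or claim the
 * first free slot. Returns the slot index, or -1 on exhaustion,
 * which corresponds to ERR_MATCH_RESOURCE_EXHAUSTION in the driver. */
static int set_add(const struct set_data *d)
{
	int free_idx = -1;

	for (int i = 0; i < SET_NB; ++i) {
		if (sets[i].ref > 0) {
			if (memcmp(&sets[i].data, d, sizeof(*d)) == 0) {
				sets[i].ref++;
				return i;
			}
		} else if (free_idx < 0) {
			free_idx = i;
		}
	}

	if (free_idx < 0)
		return -1;

	sets[free_idx].data = *d;
	sets[free_idx].ref = 1;
	return free_idx;
}

/* Drop one reference; the slot becomes reusable at refcount zero. */
static void set_deref(int idx)
{
	if (idx >= 0 && idx < SET_NB && sets[idx].ref > 0)
		sets[idx].ref--;
}
```

Deduplicating identical sets this way is what lets many flows share one hardware resource entry, so hardware programming happens only on the first reference and teardown only on the last dereference.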
---
.../profile_inline/flow_api_hw_db_inline.c | 795 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 35 +
.../profile_inline/flow_api_profile_inline.c | 55 ++
3 files changed, 885 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 0ae058b91e..52f85b65af 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,9 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_INLINE_ACTION_SET_NB 512
+#define HW_DB_INLINE_MATCH_SET_NB 512
+
#define HW_DB_FT_LOOKUP_KEY_A 0
#define HW_DB_FT_TYPE_KM 1
@@ -110,6 +113,20 @@ struct hw_db_inline_resource_db {
int cfn_hw;
int ref;
} *cfn;
+
+ uint32_t cfn_priority_counter;
+ uint32_t set_priority_counter;
+
+ struct hw_db_inline_resource_db_action_set {
+ struct hw_db_inline_action_set_data data;
+ int ref;
+ } action_set[HW_DB_INLINE_ACTION_SET_NB];
+
+ struct hw_db_inline_resource_db_match_set {
+ struct hw_db_inline_match_set_data data;
+ int ref;
+ uint32_t set_priority;
+ } match_set[HW_DB_INLINE_MATCH_SET_NB];
};
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
@@ -292,6 +309,16 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ hw_db_inline_match_set_deref(ndev, db_handle,
+ *(struct hw_db_match_set_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ hw_db_inline_action_set_deref(ndev, db_handle,
+ *(struct hw_db_action_set_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_CAT:
hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
break;
@@ -360,6 +387,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_NONE:
return NULL;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ return &db->match_set[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ return &db->action_set[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_CAT:
return &db->cat[idxs[i].ids].data;
@@ -552,6 +585,763 @@ static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int fl
}
+static void hw_db_copy_ft(struct flow_nic_dev *ndev, int type, int cfn_dst, int cfn_src,
+ int lookup, int flow_type)
+{
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index_dst = (8 * flow_type + cfn_dst / cat_funcs) * max_lookups + lookup;
+ int fte_field_dst = cfn_dst % cat_funcs;
+
+ int fte_index_src = (8 * flow_type + cfn_src / cat_funcs) * max_lookups + lookup;
+ int fte_field_src = cfn_src % cat_funcs;
+
+ uint32_t current_bm_dst = 0;
+ uint32_t current_bm_src = 0;
+ uint32_t fte_field_bm_dst = 1 << fte_field_dst;
+ uint32_t fte_field_bm_src = 1 << fte_field_src;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_src, &current_bm_src);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
fte_index_src, &current_bm_src);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t enable = current_bm_src & fte_field_bm_src;
+ uint32_t final_bm_dst = enable ? (fte_field_bm_dst | current_bm_dst)
+ : (~fte_field_bm_dst & current_bm_dst);
+
+ if (current_bm_dst != final_bm_dst) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+
+static int hw_db_inline_filter_apply(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id,
+ struct hw_db_match_set_idx match_set_idx,
+ struct hw_db_flm_ft flm_ft_idx,
+ struct hw_db_action_set_idx action_set_idx)
+{
+ (void)match_set_idx;
+ (void)flm_ft_idx;
+
+ const struct hw_db_inline_match_set_data *match_set =
+ &db->match_set[match_set_idx.ids].data;
+ const struct hw_db_inline_cat_data *cat = &db->cat[match_set->cat.ids].data;
+
+ const int km_ft = match_set->km_ft.id1;
+ const int km_rcp = (int)db->km[match_set->km.id1].data.rcp;
+
+ const int flm_ft = flm_ft_idx.id1;
+ const int flm_rcp = flm_ft_idx.id2;
+
+ const struct hw_db_inline_action_set_data *action_set =
+ &db->action_set[action_set_idx.ids].data;
+ const struct hw_db_inline_cot_data *cot = &db->cot[action_set->cot.ids].data;
+
+ const int qsl_hw_id = action_set->qsl.ids;
+ const int slc_lr_hw_id = action_set->slc_lr.ids;
+ const int tpe_hw_id = action_set->tpe.ids;
+ const int hsh_hw_id = action_set->hsh.ids;
+
+ /* Setup default FLM RCP if needed */
+ if (flm_rcp > 0 && db->flm[flm_rcp].ref <= 0)
+ hw_db_inline_setup_default_flm_rcp(ndev, flm_rcp);
+
+ /* Setup CAT.CFN */
+ {
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x0);
+
+ /* Protocol checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_ISL, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_CFP, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MAC, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L2, cat_hw_id, 0, cat->ptc_mask_l2);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VNTAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VLAN, cat_hw_id, 0, cat->vlan_mask);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, cat->ptc_mask_l3);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_FRAG, cat_hw_id, 0,
+ cat->ptc_mask_frag);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_IP_PROT, cat_hw_id, 0, cat->ip_prot);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L4, cat_hw_id, 0, cat->ptc_mask_l4);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TUNNEL, cat_hw_id, 0,
+ cat->ptc_mask_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L2, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_VLAN, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L3, cat_hw_id, 0,
+ cat->ptc_mask_l3_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_FRAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_IP_PROT, cat_hw_id, 0,
+ cat->ip_prot_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L4, cat_hw_id, 0,
+ cat->ptc_mask_l4_tunnel);
+
+ /* Error checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_CV, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_FCS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TRUNC, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L3_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L4_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L3_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L4_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl_tunnel);
+
+ /* MAC port check */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_MAC_PORT, cat_hw_id, 0,
+ cat->mac_port_mask);
+
+ /* Pattern match checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMP, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_DCT, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_EXT_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMB, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_AND_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_OR_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_INV, cat_hw_id, 0, -1);
+
+ /* Length checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC_INV, cat_hw_id, 0, -1);
+
+ /* KM and FLM */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3);
+
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 0, cat_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 0, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 1, hsh_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 2, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 2,
+ slc_lr_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 5, tpe_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 5, 0);
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id,
+ 0x001 | 0x004 | (qsl_hw_id ? 0x008 : 0) |
+ (slc_lr_hw_id ? 0x020 : 0) | 0x040 |
+ (tpe_hw_id ? 0x400 : 0));
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ km_rcp);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ flm_rcp);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, flm_ft, 1);
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COLOR, cat_hw_id, cot->frag_rcp << 10);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_KM, cat_hw_id,
+ cot->matcher_color_contrib);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ return 0;
+}
+
+static void hw_db_inline_filter_clear(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id)
+{
+ /* Setup CAT.CFN */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < 6; ++i) {
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + i, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + i, 0);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0);
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft,
+ 0);
+ }
+ }
+
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+}
+
+static void hw_db_inline_filter_copy(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db, int cfn_dst, int cfn_src)
+{
+ uint32_t val = 0;
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_COPY_FROM, cfn_dst, 0, cfn_src);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < offset; ++i) {
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_dst + i, val);
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_dst + i, val);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cfn_dst, offset);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_get(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_src, &val);
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_dst, val);
+ hw_mod_cat_cte_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_km_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_KM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_flm_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_C, ft);
+ }
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COPY_FROM, cfn_dst, cfn_src);
+ hw_mod_cat_cot_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+}
+
+/*
+ * Algorithm for moving CFN entries to make room while respecting priority.
+ * The algorithm makes the fewest possible moves needed to fit a new CFN entry.
+ */
+static int hw_db_inline_alloc_prioritized_cfn(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ struct hw_db_match_set_idx match_set_idx)
+{
+ const struct hw_db_inline_resource_db_match_set *match_set =
+ &db->match_set[match_set_idx.ids];
+
+ uint64_t priority = ((uint64_t)(match_set->data.priority & 0xff) << 56) |
+ ((uint64_t)(0xffffff - (match_set->set_priority & 0xffffff)) << 32) |
+ (0xffffffff - ++db->cfn_priority_counter);
+
+ int db_cfn_idx = -1;
+
+ struct {
+ uint64_t priority;
+ uint32_t idx;
+ } sorted_priority[db->nb_cat];
+
+ memset(sorted_priority, 0x0, sizeof(sorted_priority));
+
+ uint32_t in_use_count = 0;
+
+ for (uint32_t i = 1; i < db->nb_cat; ++i) {
+ if (db->cfn[i].ref > 0) {
+ sorted_priority[db->cfn[i].cfn_hw].priority = db->cfn[i].priority;
+ sorted_priority[db->cfn[i].cfn_hw].idx = i;
+ in_use_count += 1;
+
+ } else if (db_cfn_idx == -1) {
+ db_cfn_idx = (int)i;
+ }
+ }
+
+ if (in_use_count >= db->nb_cat - 1)
+ return -1;
+
+ if (in_use_count == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = 1;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ int goal = 1;
+ int free_before = -1000000;
+ int free_after = 1000000;
+ int found_smaller = 0;
+
+ for (int i = 1; i < (int)db->nb_cat; ++i) {
+ if (sorted_priority[i].priority > priority) { /* Bigger */
+ goal = i + 1;
+
+ } else if (sorted_priority[i].priority == 0) { /* Not set */
+ if (found_smaller) {
+ if (free_after > i)
+ free_after = i;
+
+ } else {
+ free_before = i;
+ }
+
+ } else {/* Smaller */
+ found_smaller = 1;
+ }
+ }
+
+ int diff_before = goal - free_before - 1;
+ int diff_after = free_after - goal;
+
+ if (goal < (int)db->nb_cat && sorted_priority[goal].priority == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ if (diff_after <= diff_before) {
+ for (int i = free_after; i > goal; --i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i - 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+
+ } else {
+ goal -= 1;
+
+ for (int i = free_before; i < goal; ++i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i + 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+ }
+
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+
+ return db_cfn_idx;
+}
+
+static void hw_db_inline_free_prioritized_cfn(struct hw_db_inline_resource_db *db, int cfn_hw)
+{
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (db->cfn[i].cfn_hw == cfn_hw) {
+ memset(&db->cfn[i], 0x0, sizeof(struct hw_db_inline_resource_db_cfn));
+ break;
+ }
+ }
+}
+
+static void hw_db_inline_update_active_filters(struct flow_nic_dev *ndev, void *db_handle,
+ int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[group];
+ struct hw_db_inline_resource_db_flm_cfn_map *cell;
+
+ for (uint32_t match_set_idx = 0; match_set_idx < db->nb_cat; ++match_set_idx) {
+ for (uint32_t ft_idx = 0; ft_idx < db->nb_flm_ft; ++ft_idx) {
+ int active = flm_rcp->ft[ft_idx].ref > 0 &&
+ flm_rcp->match_set[match_set_idx].ref > 0;
+ cell = &flm_rcp->cfn_map[match_set_idx * db->nb_flm_ft + ft_idx];
+
+ if (active && cell->cfn_idx == 0) {
+ /* Setup filter */
+ cell->cfn_idx = hw_db_inline_alloc_prioritized_cfn(ndev, db,
+ flm_rcp->match_set[match_set_idx].idx);
+ hw_db_inline_filter_apply(ndev, db, db->cfn[cell->cfn_idx].cfn_hw,
+ flm_rcp->match_set[match_set_idx].idx,
+ flm_rcp->ft[ft_idx].idx,
+ group == 0
+ ? db->match_set[flm_rcp->match_set[match_set_idx]
+ .idx.ids]
+ .data.action_set
+ : flm_rcp->ft[ft_idx].data.action_set);
+ }
+
+ if (!active && cell->cfn_idx > 0) {
+ /* Teardown filter */
+ hw_db_inline_filter_clear(ndev, db, db->cfn[cell->cfn_idx].cfn_hw);
+ hw_db_inline_free_prioritized_cfn(db,
+ db->cfn[cell->cfn_idx].cfn_hw);
+ cell->cfn_idx = 0;
+ }
+ }
+ }
+}
+
+
+/******************************************************************************/
+/* Match set */
+/******************************************************************************/
+
+static int hw_db_inline_match_set_compare(const struct hw_db_inline_match_set_data *data1,
+ const struct hw_db_inline_match_set_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->km_ft.raw == data2->km_ft.raw && data1->jump == data2->jump;
+}
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_match_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_MATCH_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_MATCH_SET_NB; ++i) {
+ if (!found && db->match_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->match_set[i].ref > 0 &&
+ hw_db_inline_match_set_compare(data, &db->match_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_match_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ found = 0;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].ref <= 0) {
+ found = 1;
+ flm_rcp->match_set[i].ref = 1;
+ flm_rcp->match_set[i].idx.raw = idx.raw;
+ break;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->match_set[idx.ids].data, data, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 1;
+ db->match_set[idx.ids].set_priority = ++db->set_priority_counter;
+
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
+ return idx;
+}
+
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->match_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+ int jump;
+
+ if (idx.error)
+ return;
+
+ db->match_set[idx.ids].ref -= 1;
+
+ if (db->match_set[idx.ids].ref > 0)
+ return;
+
+ jump = db->match_set[idx.ids].data.jump;
+ flm_rcp = &db->flm[jump];
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].idx.raw == idx.raw) {
+ flm_rcp->match_set[i].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, jump);
+ memset(&flm_rcp->match_set[i], 0x0,
+ sizeof(struct hw_db_inline_resource_db_flm_match_set));
+ }
+ }
+
+ memset(&db->match_set[idx.ids].data, 0x0, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 0;
+}
+
+/******************************************************************************/
+/* Action set */
+/******************************************************************************/
+
+static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_data *data1,
+ const struct hw_db_inline_action_set_data *data2)
+{
+ if (data1->contains_jump)
+ return data2->contains_jump && data1->jump == data2->jump;
+
+ return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
+ data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
+ data1->hsh.raw == data2->hsh.raw;
+}
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_action_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_ACTION_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_ACTION_SET_NB; ++i) {
+ if (!found && db->action_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->action_set[i].ref > 0 &&
+ hw_db_inline_action_set_compare(data, &db->action_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_action_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->action_set[idx.ids].data, data, sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->action_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->action_set[idx.ids].ref -= 1;
+
+ if (db->action_set[idx.ids].ref <= 0) {
+ memset(&db->action_set[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1593,6 +2383,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
return idx;
}
@@ -1647,6 +2439,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->group);
+
return idx;
}
@@ -1677,6 +2471,7 @@ void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struc
return;
flm_rcp->ft[idx.id1].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, idx.id2);
memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 9820225ffa..33de674b72 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -131,6 +131,10 @@ struct hw_db_hsh_idx {
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
+
+ HW_DB_IDX_TYPE_MATCH_SET,
+ HW_DB_IDX_TYPE_ACTION_SET,
+
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
@@ -145,6 +149,17 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_HSH,
};
+/* Container types */
+struct hw_db_inline_match_set_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_km_ft km_ft;
+ struct hw_db_action_set_idx action_set;
+ int jump;
+
+ uint8_t priority;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -224,6 +239,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
@@ -262,6 +278,25 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data);
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data);
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+
+/**/
+
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_cot_data *data);
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0b8ac26b83..ac29c59f26 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2678,10 +2678,30 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup Action Set */
+ struct hw_db_inline_action_set_data action_set_data = {
+ .contains_jump = 0,
+ .cot = cot_idx,
+ .qsl = qsl_idx,
+ .slc_lr = slc_lr_idx,
+ .tpe = tpe_idx,
+ .hsh = hsh_idx,
+ };
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
+ local_idxs[(*local_idx_counter)++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 0,
.group = group,
+ .action_set = action_set_idx,
};
struct hw_db_flm_ft flm_ft_idx = empty_pattern
? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
@@ -2868,6 +2888,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
}
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &action_set_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup CAT */
struct hw_db_inline_cat_data cat_data = {
.vlan_mask = (0xf << fd->vlans) & 0xf,
@@ -2987,6 +3019,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
struct hw_db_inline_km_ft_data km_ft_data = {
.cat = cat_idx,
.km = km_idx,
+ .action_set = action_set_idx,
};
struct hw_db_km_ft km_ft_idx =
hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
@@ -3023,10 +3056,32 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup Match Set */
+ struct hw_db_inline_match_set_data match_set_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ .km_ft = km_ft_idx,
+ .action_set = action_set_idx,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .priority = attr->priority & 0xff,
+ };
+ struct hw_db_match_set_idx match_set_idx =
+ hw_db_inline_match_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &match_set_data);
+ fh->db_idxs[fh->db_idx_counter++] = match_set_idx.raw;
+
+ if (match_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Match Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 1,
.jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .action_set = action_set_idx,
+ };
struct hw_db_flm_ft flm_ft_idx =
hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
--
2.45.0
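The KCE enable-bitmap moves in the patch above pack eight CFN enable bits into each register word, so `cfn / 8` selects the word and `cfn % 8` the bit within it. A self-contained sketch of the same read-clear-copy pattern (hypothetical helper, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Copy the enable bit of entry `src` into the slot of entry `dst`.
 * Each 32-bit word holds the enable bits of 8 entries: entry e lives
 * in word e / 8 at bit e % 8, matching the KCE layout the patch uses. */
static void copy_enable_bit(uint32_t *words, int src, int dst)
{
	uint32_t bit_src = (words[src / 8] >> (src % 8)) & 0x1;

	words[dst / 8] &= ~(1u << (dst % 8));   /* clear the destination slot */
	words[dst / 8] |= bit_src << (dst % 8); /* copy the source bit in */
}
```

The driver performs the same three steps through the `hw_mod_cat_kce_*_get`/`_set` register accessors, then flushes only the word holding `cfn_dst`.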
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 37/73] net/ntnic: add flow dump feature
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (35 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 36/73] net/ntnic: match and action db attributes were added Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 38/73] net/ntnic: add flow flush Serhii Iliushyk
` (35 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Add the possibility to dump flows in a human-readable format.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 17 ++
.../profile_inline/flow_api_hw_db_inline.c | 264 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 3 +
.../profile_inline/flow_api_profile_inline.c | 81 ++++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 29 ++
drivers/net/ntnic/ntnic_mod_reg.h | 11 +
8 files changed, 413 insertions(+)
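The FLM key dump added below prints each of the ten 32-bit key words as four big-endian hex bytes, two words per line. A minimal standalone sketch of that byte formatting (hypothetical helper writing to a buffer so it can be checked; the patch itself writes to a FILE*):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Render one 32-bit key word as "AA BB CC DD" (most significant
 * byte first), the byte order dump_flm_data() uses in the patch. */
static int format_word(char *buf, size_t len, uint32_t w)
{
	return snprintf(buf, len, "%02X %02X %02X %02X",
			(unsigned)(w >> 24) & 0xffu, (unsigned)(w >> 16) & 0xffu,
			(unsigned)(w >> 8) & 0xffu, (unsigned)w & 0xffu);
}
```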
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index e52363f04e..155a9e1fd6 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -281,6 +281,8 @@ struct flow_handle {
struct flow_handle *next;
struct flow_handle *prev;
+ /* Flow specific pointer to application data stored during action creation. */
+ void *context;
void *user_data;
union {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 043e4244fc..7f1e311988 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1006,6 +1006,22 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
return 0;
}
+static int flow_dev_dump(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_dev_dump_profile_inline(dev, flow, caller_id, file, error);
+}
+
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf)
{
@@ -1031,6 +1047,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_dev_dump = flow_dev_dump,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 52f85b65af..b5fee67e67 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -372,6 +372,270 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ char str_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(str_buffer);
+
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_MATCH_SET: {
+ const struct hw_db_inline_match_set_data *data =
+ &db->match_set[idxs[i].ids].data;
+ fprintf(file, " MATCH_SET %d, priority %d\n", idxs[i].ids,
+ (int)data->priority);
+ fprintf(file, " CAT id %d, KM id %d, KM_FT id %d, ACTION_SET id %d\n",
+ data->cat.ids, data->km.id1, data->km_ft.id1,
+ data->action_set.ids);
+
+ if (data->jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_ACTION_SET: {
+ const struct hw_db_inline_action_set_data *data =
+ &db->action_set[idxs[i].ids].data;
+ fprintf(file, " ACTION_SET %d\n", idxs[i].ids);
+
+ if (data->contains_jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ else
+ fprintf(file,
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ data->cot.ids, data->qsl.ids, data->slc_lr.ids,
+ data->tpe.ids, data->hsh.ids);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_CAT: {
+ const struct hw_db_inline_cat_data *data = &db->cat[idxs[i].ids].data;
+ fprintf(file, " CAT %d\n", idxs[i].ids);
+ fprintf(file, " Port msk 0x%02x, VLAN msk 0x%02x\n",
+ (int)data->mac_port_mask, (int)data->vlan_mask);
+ fprintf(file,
+ " Proto msks: Frag 0x%02x, l2 0x%02x, l3 0x%02x, l4 0x%02x, l3t 0x%02x, l4t 0x%02x\n",
+ (int)data->ptc_mask_frag, (int)data->ptc_mask_l2,
+ (int)data->ptc_mask_l3, (int)data->ptc_mask_l4,
+ (int)data->ptc_mask_l3_tunnel, (int)data->ptc_mask_l4_tunnel);
+ fprintf(file, " IP protocol: pn %u pnt %u\n", data->ip_prot,
+ data->ip_prot_tunnel);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_QSL: {
+ const struct hw_db_inline_qsl_data *data = &db->qsl[idxs[i].ids].data;
+ fprintf(file, " QSL %d\n", idxs[i].ids);
+
+ if (data->discard) {
+ fprintf(file, " Discard\n");
+ break;
+ }
+
+ if (data->drop) {
+ fprintf(file, " Drop\n");
+ break;
+ }
+
+ fprintf(file, " Table size %d\n", data->table_size);
+
+ for (uint32_t i = 0;
+ i < data->table_size && i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ fprintf(file, " %u: Queue %d, TX port %d\n", i,
+ (data->table[i].queue_en ? (int)data->table[i].queue : -1),
+ (data->table[i].tx_port_en ? (int)data->table[i].tx_port
+ : -1));
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_COT: {
+ const struct hw_db_inline_cot_data *data = &db->cot[idxs[i].ids].data;
+ fprintf(file, " COT %d\n", idxs[i].ids);
+ fprintf(file, " Color contrib %d, frag rcp %d\n",
+ (int)data->matcher_color_contrib, (int)data->frag_rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_SLC_LR: {
+ const struct hw_db_inline_slc_lr_data *data =
+ &db->slc_lr[idxs[i].ids].data;
+ fprintf(file, " SLC_LR %d\n", idxs[i].ids);
+ fprintf(file, " Enable %u, dyn %u, ofs %u\n", data->head_slice_en,
+ data->head_slice_dyn, data->head_slice_ofs);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE: {
+ const struct hw_db_inline_tpe_data *data = &db->tpe[idxs[i].ids].data;
+ fprintf(file, " TPE %d\n", idxs[i].ids);
+ fprintf(file, " Insert len %u, new outer %u, calc eth %u\n",
+ data->insert_len, data->new_outer,
+ data->calc_eth_type_from_inner_ip);
+ fprintf(file, " TTL enable %u, dyn %u, ofs %u\n", data->ttl_en,
+ data->ttl_dyn, data->ttl_ofs);
+ fprintf(file,
+ " Len A enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_a_en, data->len_a_pos_dyn, data->len_a_pos_ofs,
+ data->len_a_add_dyn, data->len_a_add_ofs, data->len_a_sub_dyn);
+ fprintf(file,
+ " Len B enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_b_en, data->len_b_pos_dyn, data->len_b_pos_ofs,
+ data->len_b_add_dyn, data->len_b_add_ofs, data->len_b_sub_dyn);
+ fprintf(file,
+ " Len C enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_c_en, data->len_c_pos_dyn, data->len_c_pos_ofs,
+ data->len_c_add_dyn, data->len_c_add_ofs, data->len_c_sub_dyn);
+
+ for (uint32_t i = 0; i < 6; ++i)
+ if (data->writer[i].en)
+ fprintf(file,
+ " Writer %i: Reader %u, dyn %u, ofs %u, len %u\n",
+ i, data->writer[i].reader_select,
+ data->writer[i].dyn, data->writer[i].ofs,
+ data->writer[i].len);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE_EXT: {
+ const struct hw_db_inline_tpe_ext_data *data =
+ &db->tpe_ext[idxs[i].ids].data;
+ const int rpl_rpl_length = ((int)data->size + 15) / 16;
+ fprintf(file, " TPE_EXT %d\n", idxs[i].ids);
+ fprintf(file, " Encap data, size %u\n", data->size);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ fprintf(file, " ");
+
+ for (int n = 15; n >= 0; --n)
+ fprintf(file, " %02x%s", data->hdr8[i * 16 + n],
+ n == 8 ? " " : "");
+
+ fprintf(file, "\n");
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_RCP: {
+ const struct hw_db_inline_flm_rcp_data *data = &db->flm[idxs[i].id1].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " QW0 dyn %u, ofs %u, QW4 dyn %u, ofs %u\n",
+ data->qw0_dyn, data->qw0_ofs, data->qw4_dyn, data->qw4_ofs);
+ fprintf(file, " SW8 dyn %u, ofs %u, SW9 dyn %u, ofs %u\n",
+ data->sw8_dyn, data->sw8_ofs, data->sw9_dyn, data->sw9_ofs);
+ fprintf(file, " Outer prot %u, inner prot %u\n", data->outer_prot,
+ data->inner_prot);
+ fprintf(file, " Mask:\n");
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[0],
+ data->mask[1], data->mask[2], data->mask[3], data->mask[4]);
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[5],
+ data->mask[6], data->mask[7], data->mask[8], data->mask[9]);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_FT: {
+ const struct hw_db_inline_flm_ft_data *data =
+ &db->flm[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " FLM_FT %d\n", idxs[i].id1);
+
+ if (data->is_group_zero)
+ fprintf(file, " Jump to %d\n", data->jump);
+
+ else
+ fprintf(file, " Group %d\n", data->group);
+
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_RCP: {
+ const struct hw_db_inline_km_rcp_data *data = &db->km[idxs[i].id1].data;
+ fprintf(file, " KM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " HW id %u\n", data->rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_FT: {
+ const struct hw_db_inline_km_ft_data *data =
+ &db->km[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " KM_FT %d\n", idxs[i].id1);
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ fprintf(file, " KM_RCP id %d\n", data->km.ids);
+ fprintf(file, " CAT id %d\n", data->cat.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_HSH: {
+ const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
+ fprintf(file, " HSH %d\n", idxs[i].ids);
+
+ switch (data->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ fprintf(file, " Func: NTH10\n");
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ fprintf(file, " Func: Toeplitz\n");
+ fprintf(file, " Key:");
+
+ for (uint8_t i = 0; i < MAX_RSS_KEY_LEN; i++) {
+ if (i % 10 == 0)
+ fprintf(file, "\n ");
+
+ fprintf(file, " %02x", data->key[i]);
+ }
+
+ fprintf(file, "\n");
+ break;
+
+ default:
+ fprintf(file, " Func: %u\n", data->func);
+ }
+
+ fprintf(file, " Hash mask hex:\n");
+ fprintf(file, " %016" PRIx64 "\n", data->hash_mask);
+
+ /* convert hash mask to human readable RTE_ETH_RSS_* form if possible */
+ if (sprint_nt_rss_mask(str_buffer, rss_buffer_len, "\n ",
+ data->hash_mask) == 0) {
+ fprintf(file, " Hash mask flags:%s\n", str_buffer);
+ }
+
+ break;
+ }
+
+ default: {
+ fprintf(file, " Unknown item. Type %u\n", idxs[i].type);
+ break;
+ }
+ }
+ }
+}
+
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ fprintf(file, "CFN status:\n");
+
+ for (uint32_t id = 0; id < db->nb_cat; ++id)
+ if (db->cfn[id].cfn_hw)
+ fprintf(file, " ID %d, HW id %d, priority 0x%" PRIx64 "\n", (int)id,
+ db->cfn[id].cfn_hw, db->cfn[id].priority);
+}
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 33de674b72..a9d31c86ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -276,6 +276,9 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
uint32_t size);
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file);
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file);
/**/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index ac29c59f26..e47ef37c6b 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4301,6 +4301,86 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
return res;
}
+static void dump_flm_data(const uint32_t *data, FILE *file)
+{
+ for (unsigned int i = 0; i < 10; ++i) {
+ fprintf(file, "%s%02X %02X %02X %02X%s", i % 2 ? "" : " ",
+ (data[i] >> 24) & 0xff, (data[i] >> 16) & 0xff, (data[i] >> 8) & 0xff,
+ data[i] & 0xff, i % 2 ? "\n" : " ");
+ }
+}
+
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ if (flow != NULL) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLM) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+
+ } else {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs, flow->db_idx_counter,
+ file);
+ }
+
+ } else {
+ int max_flm_count = 1000;
+
+ hw_db_inline_dump_cfn(dev->ndev, dev->ndev->hw_db_handle, file);
+
+ flow = dev->ndev->flow_base;
+
+ while (flow) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs,
+ flow->db_idx_counter, file);
+ }
+
+ flow = flow->next;
+ }
+
+ flow = dev->ndev->flow_base_flm;
+
+ while (flow && max_flm_count >= 0) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+ max_flm_count -= 1;
+ }
+
+ flow = flow->next;
+ }
+ }
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
static const struct profile_inline_ops ops = {
/*
@@ -4309,6 +4389,7 @@ static const struct profile_inline_ops ops = {
.done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
.initialize_flow_management_of_ndev_profile_inline =
initialize_flow_management_of_ndev_profile_inline,
+ .flow_dev_dump_profile_inline = flow_dev_dump_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e623bb2352..2c76a2c023 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index df391b6399..5505198148 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -569,9 +569,38 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: flow_filter module uninitialized", __func__);
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_dev_dump(internals->flw_dev,
+ is_flow_handle_typecast(flow) ? (void *)flow
+ : flow->flw_hdl,
+ caller_id, file, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .dev_dump = eth_flow_dev_dump,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 27d6cbef01..cef655c5e0 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,12 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -284,6 +290,11 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ int (*flow_dev_dump)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
/*
* NT Flow API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
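The `dump_flm_data()` helper at the top of this message prints each 32-bit FLM word most-significant byte first, two words per output line. A minimal standalone sketch of that byte-splitting, using a hypothetical `format_flm_word()` helper (not part of the driver):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Mirrors the byte layout used by dump_flm_data(): each 32-bit FLM word
 * is rendered most-significant byte first as four hex pairs. */
static int format_flm_word(char *buf, size_t len, uint32_t word)
{
	return snprintf(buf, len, "%02X %02X %02X %02X",
			(word >> 24) & 0xff, (word >> 16) & 0xff,
			(word >> 8) & 0xff, word & 0xff);
}
```

The driver's version writes straight to the dump `FILE *`; splitting the formatting out as above only serves to make the byte ordering visible in isolation.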
* [PATCH v3 38/73] net/ntnic: add flow flush
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (36 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 37/73] net/ntnic: add flow dump feature Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 39/73] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
` (34 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Implement the flow flush API.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 13 ++++++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 4 ++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 38 ++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +++
5 files changed, 105 insertions(+)
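The flush implementation below walks the FLM flow list first and then the normal flow list, destroying every handle whose device and caller id match, saving the `next` pointer before each destroy so the walk survives the removal. A standalone sketch of that destroy-while-iterating pattern, with hypothetical stand-in types (`flow_handle` here is a toy struct, and `free()` stands in for `flow_destroy_profile_inline()`):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the driver's flow handle. */
struct flow_handle {
	int caller_id;
	struct flow_handle *next;
};

/* Save flow->next before destroying the current node, as the patch does,
 * so the traversal is not broken by the removal of matching entries. */
static int flush_list(struct flow_handle **base, int caller_id)
{
	int destroyed = 0;
	struct flow_handle **prev = base;
	struct flow_handle *flow = *base;

	while (flow) {
		struct flow_handle *next = flow->next;

		if (flow->caller_id == caller_id) {
			*prev = next;	/* unlink */
			free(flow);	/* stand-in for the real destroy */
			destroyed++;
		} else {
			prev = &flow->next;
		}

		flow = next;
	}

	return destroyed;
}
```

In the real patch the unlinking happens inside `flow_destroy_profile_inline()` rather than at the call site; the sketch inlines it only to stay self-contained.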
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 7f1e311988..34f2cad2cd 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -253,6 +253,18 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
+static int flow_flush(struct flow_eth_dev *dev, uint16_t caller_id, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_flush_profile_inline(dev, caller_id, error);
+}
+
/*
* Device Management API
*/
@@ -1047,6 +1059,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index e47ef37c6b..1dfd96eaac 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3636,6 +3636,48 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ /*
+ * Delete all created FLM flows from this eth device.
+ * FLM flows must be deleted first because normal flows are their parents.
+ */
+ struct flow_handle *flow = dev->ndev->flow_base_flm;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ /* Delete all created flows from this eth device */
+ flow = dev->ndev->flow_base;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ return err;
+}
+
static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
{
return (hash_mask & hash_bits) == hash_bits;
@@ -4396,6 +4438,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
/*
* NT Flow FLM Meter API
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 2c76a2c023..c695842077 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,10 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 5505198148..87b26bd315 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -569,6 +569,43 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ int res = 0;
+ /* Main application caller_id is port_id shifted above VDPA ports */
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (internals->flw_dev) {
+ res = flow_filter_ops->flow_flush(internals->flw_dev, caller_id, &flow_error);
+ rte_spinlock_lock(&flow_lock);
+
+ for (int flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used && nt_flows[flow].caller_id == caller_id) {
+ /* Cleanup recorded flows */
+ nt_flows[flow].used = 0;
+ nt_flows[flow].caller_id = 0;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
+ }
+
+ convert_error(error, &flow_error);
+
+ return res;
+}
+
static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
struct rte_flow *flow,
FILE *file,
@@ -600,6 +637,7 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index cef655c5e0..12baa13800 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,10 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_flush_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -309,6 +313,9 @@ struct flow_filter_ops {
int (*flow_destroy)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 39/73] net/ntnic: add GMF (Generic MAC Feeder) module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (37 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 38/73] net/ntnic: add flow flush Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 40/73] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
` (33 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
The Generic MAC Feeder module provides a way to feed data
to the MAC modules directly from the FPGA,
rather than from host or physical ports.
Its use case is as a test tool; it is not used by the NTNIC driver itself.
The module is nevertheless required for correct initialization.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 ++
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +++++++++
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 ++++++++++++++++++
5 files changed, 207 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
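The port setup hunk below calls `nthw_gmf_init()` twice: first with a NULL object pointer as a pure existence probe, then for real only if the module instance is present. A minimal standalone sketch of that "NULL means probe-only" idiom, with hypothetical stand-in types (`fake_fpga`, `fake_gmf`, and `query_module()` are toy substitutes for the driver's FPGA module registry):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the FPGA module registry. */
struct fake_fpga { bool has_gmf; };
struct fake_gmf { const struct fake_fpga *fpga; int instance; bool enabled; };

static const void *query_module(const struct fake_fpga *fpga, int instance)
{
	(void)instance;
	return fpga->has_gmf ? fpga : NULL;	/* non-NULL iff the module exists */
}

/* Mirrors nthw_gmf_init(): a NULL object pointer turns the call into a
 * probe that only reports whether the module instance exists. */
static int gmf_init(struct fake_gmf *p, const struct fake_fpga *fpga, int instance)
{
	const void *mod = query_module(fpga, instance);

	if (p == NULL)
		return mod == NULL ? -1 : 0;	/* probe-only call */

	if (mod == NULL)
		return -1;

	p->fpga = fpga;
	p->instance = instance;
	p->enabled = false;
	return 0;
}
```

The probe avoids logging an error (and filling out a full object) on FPGAs that simply do not carry a GMF instance, which is why the patch guards the real init behind it.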
diff --git a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
index 8964458b47..d8e0cad7cd 100644
--- a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
+++ b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
@@ -404,6 +404,14 @@ static int _port_init(adapter_info_t *drv, nthw_fpga_t *fpga, int port)
_enable_tx(drv, mac_pcs);
_reset_rx(drv, mac_pcs);
+ /* 2.2) Nt4gaPort::setup() */
+ if (nthw_gmf_init(NULL, fpga, port) == 0) {
+ nthw_gmf_t gmf;
+
+ if (nthw_gmf_init(&gmf, fpga, port) == 0)
+ nthw_gmf_set_enable(&gmf, true);
+ }
+
/* Phase 3. Link state machine steps */
/* 3.1) Create NIM, ::createNim() */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d7e6d05556..92167d24e4 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -38,6 +38,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst9563.c',
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
+ 'nthw/core/nthw_gmf.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_core.h b/drivers/net/ntnic/nthw/core/include/nthw_core.h
index fe32891712..4073f9632c 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_core.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_core.h
@@ -17,6 +17,7 @@
#include "nthw_iic.h"
#include "nthw_i2cm.h"
+#include "nthw_gmf.h"
#include "nthw_gpio_phy.h"
#include "nthw_mac_pcs.h"
#include "nthw_sdc.h"
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_gmf.h b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
new file mode 100644
index 0000000000..cc5be85154
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
@@ -0,0 +1,64 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_GMF_H__
+#define __NTHW_GMF_H__
+
+struct nthw_gmf {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_gmf;
+ int mn_instance;
+
+ nthw_register_t *mp_ctrl;
+ nthw_field_t *mp_ctrl_enable;
+ nthw_field_t *mp_ctrl_ifg_enable;
+ nthw_field_t *mp_ctrl_ifg_tx_now_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock;
+ nthw_field_t *mp_ctrl_ifg_auto_adjust_enable;
+ nthw_field_t *mp_ctrl_ts_inject_always;
+ nthw_field_t *mp_ctrl_fcs_always;
+
+ nthw_register_t *mp_speed;
+ nthw_field_t *mp_speed_ifg_speed;
+
+ nthw_register_t *mp_ifg_clock_delta;
+ nthw_field_t *mp_ifg_clock_delta_delta;
+
+ nthw_register_t *mp_ifg_clock_delta_adjust;
+ nthw_field_t *mp_ifg_clock_delta_adjust_delta;
+
+ nthw_register_t *mp_ifg_max_adjust_slack;
+ nthw_field_t *mp_ifg_max_adjust_slack_slack;
+
+ nthw_register_t *mp_debug_lane_marker;
+ nthw_field_t *mp_debug_lane_marker_compensation;
+
+ nthw_register_t *mp_stat_sticky;
+ nthw_field_t *mp_stat_sticky_data_underflowed;
+ nthw_field_t *mp_stat_sticky_ifg_adjusted;
+
+ nthw_register_t *mp_stat_next_pkt;
+ nthw_field_t *mp_stat_next_pkt_ns;
+
+ nthw_register_t *mp_stat_max_delayed_pkt;
+ nthw_field_t *mp_stat_max_delayed_pkt_ns;
+
+ nthw_register_t *mp_ts_inject;
+ nthw_field_t *mp_ts_inject_offset;
+ nthw_field_t *mp_ts_inject_pos;
+ int mn_param_gmf_ifg_speed_mul;
+ int mn_param_gmf_ifg_speed_div;
+
+ bool m_administrative_block; /* Used to enforce license expiry */
+};
+
+typedef struct nthw_gmf nthw_gmf_t;
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable);
+
+#endif /* __NTHW_GMF_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_gmf.c b/drivers/net/ntnic/nthw/core/nthw_gmf.c
new file mode 100644
index 0000000000..16a4c288bd
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_gmf.c
@@ -0,0 +1,133 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <limits.h>
+#include <math.h>
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_gmf.h"
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_GMF, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: GMF %d: no such instance",
+ p_fpga->p_fpga_info->mp_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_gmf = mod;
+
+ p->mp_ctrl = nthw_module_get_register(p->mp_mod_gmf, GMF_CTRL);
+ p->mp_ctrl_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_ENABLE);
+ p->mp_ctrl_ifg_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_ENABLE);
+ p->mp_ctrl_ifg_auto_adjust_enable =
+ nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_AUTO_ADJUST_ENABLE);
+ p->mp_ctrl_ts_inject_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_TS_INJECT_ALWAYS);
+ p->mp_ctrl_fcs_always = nthw_register_query_field(p->mp_ctrl, GMF_CTRL_FCS_ALWAYS);
+
+ p->mp_speed = nthw_module_get_register(p->mp_mod_gmf, GMF_SPEED);
+ p->mp_speed_ifg_speed = nthw_register_get_field(p->mp_speed, GMF_SPEED_IFG_SPEED);
+
+ p->mp_ifg_clock_delta = nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA);
+ p->mp_ifg_clock_delta_delta =
+ nthw_register_get_field(p->mp_ifg_clock_delta, GMF_IFG_SET_CLOCK_DELTA_DELTA);
+
+ p->mp_ifg_max_adjust_slack =
+ nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_MAX_ADJUST_SLACK);
+ p->mp_ifg_max_adjust_slack_slack = nthw_register_get_field(p->mp_ifg_max_adjust_slack,
+ GMF_IFG_MAX_ADJUST_SLACK_SLACK);
+
+ p->mp_debug_lane_marker = nthw_module_get_register(p->mp_mod_gmf, GMF_DEBUG_LANE_MARKER);
+ p->mp_debug_lane_marker_compensation =
+ nthw_register_get_field(p->mp_debug_lane_marker,
+ GMF_DEBUG_LANE_MARKER_COMPENSATION);
+
+ p->mp_stat_sticky = nthw_module_get_register(p->mp_mod_gmf, GMF_STAT_STICKY);
+ p->mp_stat_sticky_data_underflowed =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_DATA_UNDERFLOWED);
+ p->mp_stat_sticky_ifg_adjusted =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_IFG_ADJUSTED);
+
+ p->mn_param_gmf_ifg_speed_mul =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_MUL, 1);
+ p->mn_param_gmf_ifg_speed_div =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_DIV, 1);
+
+ p->m_administrative_block = false;
+
+ p->mp_stat_next_pkt = nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_NEXT_PKT);
+
+ if (p->mp_stat_next_pkt) {
+ p->mp_stat_next_pkt_ns =
+ nthw_register_query_field(p->mp_stat_next_pkt, GMF_STAT_NEXT_PKT_NS);
+
+ } else {
+ p->mp_stat_next_pkt_ns = NULL;
+ }
+
+ p->mp_stat_max_delayed_pkt =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_MAX_DELAYED_PKT);
+
+ if (p->mp_stat_max_delayed_pkt) {
+ p->mp_stat_max_delayed_pkt_ns =
+ nthw_register_query_field(p->mp_stat_max_delayed_pkt,
+ GMF_STAT_MAX_DELAYED_PKT_NS);
+
+ } else {
+ p->mp_stat_max_delayed_pkt_ns = NULL;
+ }
+
+ p->mp_ctrl_ifg_tx_now_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_NOW_ALWAYS);
+ p->mp_ctrl_ifg_tx_on_ts_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ALWAYS);
+
+ p->mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ADJUST_ON_SET_CLOCK);
+
+ p->mp_ifg_clock_delta_adjust =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA_ADJUST);
+
+ if (p->mp_ifg_clock_delta_adjust) {
+ p->mp_ifg_clock_delta_adjust_delta =
+ nthw_register_query_field(p->mp_ifg_clock_delta_adjust,
+ GMF_IFG_SET_CLOCK_DELTA_ADJUST_DELTA);
+
+ } else {
+ p->mp_ifg_clock_delta_adjust_delta = NULL;
+ }
+
+ p->mp_ts_inject = nthw_module_query_register(p->mp_mod_gmf, GMF_TS_INJECT);
+
+ if (p->mp_ts_inject) {
+ p->mp_ts_inject_offset =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_OFFSET);
+ p->mp_ts_inject_pos =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_POS);
+
+ } else {
+ p->mp_ts_inject_offset = NULL;
+ p->mp_ts_inject_pos = NULL;
+ }
+
+ return 0;
+}
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable)
+{
+ if (!p->m_administrative_block)
+ nthw_field_set_val_flush32(p->mp_ctrl_enable, enable ? 1 : 0);
+}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 40/73] net/ntnic: sort FPGA registers alphanumerically
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (38 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 39/73] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 41/73] net/ntnic: add MOD CSU Serhii Iliushyk
` (32 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Beautification commit. It is required to cleanly support different FPGA variants.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 364 +++++++++---------
1 file changed, 182 insertions(+), 182 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 6df7208649..e076697a92 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,187 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
+ { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
+ { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
+ { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
+ { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
+ { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
+ { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
+ { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
+ { DBS_RX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
+ { DBS_RX_INIT_BUSY, 1, 8, 0 },
+ { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
+ { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
+ { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
+ { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
+ { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
+ { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
+ { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
+ { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
+ { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
+ { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
+ { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
+ { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
+ { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
+ { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
+ { DBS_TX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
+ { DBS_TX_INIT_BUSY, 1, 8, 0 },
+ { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
+ { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
+ { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
+ { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
+ { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
+ { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
+ { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
+ { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
+ { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
+ { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
+ { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
+ { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
+ { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
+ { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
+ { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_register_init_s dbs_registers[] = {
+ { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
+ { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
+ { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
+ { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
+ { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
+ { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
+ { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
+ { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
+ { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
+ { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
+ { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
+ { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
+ { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
+ { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
+ { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
+ { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
+ { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
+ { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
+ { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
+ { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
+ { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
+ { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
+ { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
+ { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
+ { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
+ { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
+ { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1541,192 +1722,11 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
-static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
- { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
- { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
- { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
- { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
- { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
- { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
- { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
- { DBS_RX_IDLE_BUSY, 1, 8, 0 },
- { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
- { DBS_RX_INIT_BUSY, 1, 8, 0 },
- { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
- { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
- { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
- { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
- { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
- { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
- { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
- { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
- { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
- { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
- { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
- { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
- { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
- { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
- { DBS_TX_IDLE_BUSY, 1, 8, 0 },
- { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
- { DBS_TX_INIT_BUSY, 1, 8, 0 },
- { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
- { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
- { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
- { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
- { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
- { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
- { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
- { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
- { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
- { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
- { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
- { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
- { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
- { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
- { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_register_init_s dbs_registers[] = {
- { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
- { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
- { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
- { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
- { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
- { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
- { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
- { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
- { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
- { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
- { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
- { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
- { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
- { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
- { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
- { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
- { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
- { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
- { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
- { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
- { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
- { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
- { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
- { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
- { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
- { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
- { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
-};
-
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
- { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers},
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
{
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 41/73] net/ntnic: add MOD CSU
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (39 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 40/73] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 42/73] net/ntnic: add MOD FLM Serhii Iliushyk
` (31 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Checksum Update module updates the checksums of packets
that have been modified in any way.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index e076697a92..efa7b306bc 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,23 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
+ { CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s csu_rcp_data_fields[] = {
+ { CSU_RCP_DATA_IL3_CMD, 2, 5, 0x0000 },
+ { CSU_RCP_DATA_IL4_CMD, 3, 7, 0x0000 },
+ { CSU_RCP_DATA_OL3_CMD, 2, 0, 0x0000 },
+ { CSU_RCP_DATA_OL4_CMD, 3, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s csu_registers[] = {
+ { CSU_RCP_CTRL, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, csu_rcp_ctrl_fields },
+ { CSU_RCP_DATA, 2, 10, NTHW_FPGA_REG_TYPE_WO, 0, 4, csu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
{ DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
{ DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
@@ -1724,6 +1741,7 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
@@ -1919,5 +1937,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 22, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
};
--
2.45.0
* [PATCH v3 42/73] net/ntnic: add MOD FLM
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (40 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 41/73] net/ntnic: add MOD CSU Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 43/73] net/ntnic: add HFU module Serhii Iliushyk
` (30 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup and
programming engine that supports exact-match lookup at line rate
for up to hundreds of millions of flows.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 286 +++++++++++++++++-
1 file changed, 284 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efa7b306bc..739cabfb1c 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -468,6 +468,288 @@ static nthw_fpga_register_init_s dbs_registers[] = {
{ DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
};
+static nthw_fpga_field_init_s flm_buf_ctrl_fields[] = {
+ { FLM_BUF_CTRL_INF_AVAIL, 16, 16, 0x0000 },
+ { FLM_BUF_CTRL_LRN_FREE, 16, 0, 0x0000 },
+ { FLM_BUF_CTRL_STA_AVAIL, 16, 32, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_control_fields[] = {
+ { FLM_CONTROL_CALIB_RECALIBRATE, 3, 28, 0 },
+ { FLM_CONTROL_CRCRD, 1, 12, 0x0000 },
+ { FLM_CONTROL_CRCWR, 1, 11, 0x0000 },
+ { FLM_CONTROL_EAB, 5, 18, 0 },
+ { FLM_CONTROL_ENABLE, 1, 0, 0 },
+ { FLM_CONTROL_INIT, 1, 1, 0x0000 },
+ { FLM_CONTROL_LDS, 1, 2, 0x0000 },
+ { FLM_CONTROL_LFS, 1, 3, 0x0000 },
+ { FLM_CONTROL_LIS, 1, 4, 0x0000 },
+ { FLM_CONTROL_PDS, 1, 9, 0x0000 },
+ { FLM_CONTROL_PIS, 1, 10, 0x0000 },
+ { FLM_CONTROL_RBL, 4, 13, 0 },
+ { FLM_CONTROL_RDS, 1, 7, 0x0000 },
+ { FLM_CONTROL_RIS, 1, 8, 0x0000 },
+ { FLM_CONTROL_SPLIT_SDRAM_USAGE, 5, 23, 16 },
+ { FLM_CONTROL_UDS, 1, 5, 0x0000 },
+ { FLM_CONTROL_UIS, 1, 6, 0x0000 },
+ { FLM_CONTROL_WPD, 1, 17, 0 },
+};
+
+static nthw_fpga_field_init_s flm_inf_data_fields[] = {
+ { FLM_INF_DATA_BYTES, 64, 0, 0x0000 }, { FLM_INF_DATA_CAUSE, 3, 224, 0x0000 },
+ { FLM_INF_DATA_EOR, 1, 287, 0x0000 }, { FLM_INF_DATA_ID, 32, 192, 0x0000 },
+ { FLM_INF_DATA_PACKETS, 64, 64, 0x0000 }, { FLM_INF_DATA_TS, 64, 128, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_aps_fields[] = {
+ { FLM_LOAD_APS_APS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_bin_fields[] = {
+ { FLM_LOAD_BIN_BIN, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_lps_fields[] = {
+ { FLM_LOAD_LPS_LPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
+ { FLM_LRN_DATA_ADJ, 32, 480, 0x0000 }, { FLM_LRN_DATA_COLOR, 32, 448, 0x0000 },
+ { FLM_LRN_DATA_DSCP, 6, 698, 0x0000 }, { FLM_LRN_DATA_ENT, 1, 693, 0x0000 },
+ { FLM_LRN_DATA_EOR, 1, 767, 0x0000 }, { FLM_LRN_DATA_FILL, 16, 544, 0x0000 },
+ { FLM_LRN_DATA_FT, 4, 560, 0x0000 }, { FLM_LRN_DATA_FT_MBR, 4, 564, 0x0000 },
+ { FLM_LRN_DATA_FT_MISS, 4, 568, 0x0000 }, { FLM_LRN_DATA_ID, 32, 512, 0x0000 },
+ { FLM_LRN_DATA_KID, 8, 328, 0x0000 }, { FLM_LRN_DATA_MBR_ID1, 28, 572, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID2, 28, 600, 0x0000 }, { FLM_LRN_DATA_MBR_ID3, 28, 628, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID4, 28, 656, 0x0000 }, { FLM_LRN_DATA_NAT_EN, 1, 711, 0x0000 },
+ { FLM_LRN_DATA_NAT_IP, 32, 336, 0x0000 }, { FLM_LRN_DATA_NAT_PORT, 16, 400, 0x0000 },
+ { FLM_LRN_DATA_NOFI, 1, 716, 0x0000 }, { FLM_LRN_DATA_OP, 4, 694, 0x0000 },
+ { FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
+ { FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
+ { FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
+ { FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
+ { FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_prio_fields[] = {
+ { FLM_PRIO_FT0, 4, 4, 1 }, { FLM_PRIO_FT1, 4, 12, 1 }, { FLM_PRIO_FT2, 4, 20, 1 },
+ { FLM_PRIO_FT3, 4, 28, 1 }, { FLM_PRIO_LIMIT0, 4, 0, 0 }, { FLM_PRIO_LIMIT1, 4, 8, 0 },
+ { FLM_PRIO_LIMIT2, 4, 16, 0 }, { FLM_PRIO_LIMIT3, 4, 24, 0 },
+};
+
+static nthw_fpga_field_init_s flm_pst_ctrl_fields[] = {
+ { FLM_PST_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_PST_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_pst_data_fields[] = {
+ { FLM_PST_DATA_BP, 5, 0, 0x0000 },
+ { FLM_PST_DATA_PP, 5, 5, 0x0000 },
+ { FLM_PST_DATA_TP, 5, 10, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_ctrl_fields[] = {
+ { FLM_RCP_CTRL_ADR, 5, 0, 0x0000 },
+ { FLM_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_data_fields[] = {
+ { FLM_RCP_DATA_AUTO_IPV4_MASK, 1, 402, 0x0000 },
+ { FLM_RCP_DATA_BYT_DYN, 5, 387, 0x0000 },
+ { FLM_RCP_DATA_BYT_OFS, 8, 392, 0x0000 },
+ { FLM_RCP_DATA_IPN, 1, 386, 0x0000 },
+ { FLM_RCP_DATA_KID, 8, 377, 0x0000 },
+ { FLM_RCP_DATA_LOOKUP, 1, 0, 0x0000 },
+ { FLM_RCP_DATA_MASK, 320, 57, 0x0000 },
+ { FLM_RCP_DATA_OPN, 1, 385, 0x0000 },
+ { FLM_RCP_DATA_QW0_DYN, 5, 1, 0x0000 },
+ { FLM_RCP_DATA_QW0_OFS, 8, 6, 0x0000 },
+ { FLM_RCP_DATA_QW0_SEL, 2, 14, 0x0000 },
+ { FLM_RCP_DATA_QW4_DYN, 5, 16, 0x0000 },
+ { FLM_RCP_DATA_QW4_OFS, 8, 21, 0x0000 },
+ { FLM_RCP_DATA_SW8_DYN, 5, 29, 0x0000 },
+ { FLM_RCP_DATA_SW8_OFS, 8, 34, 0x0000 },
+ { FLM_RCP_DATA_SW8_SEL, 2, 42, 0x0000 },
+ { FLM_RCP_DATA_SW9_DYN, 5, 44, 0x0000 },
+ { FLM_RCP_DATA_SW9_OFS, 8, 49, 0x0000 },
+ { FLM_RCP_DATA_TXPLM, 2, 400, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scan_fields[] = {
+ { FLM_SCAN_I, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s flm_status_fields[] = {
+ { FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
+ { FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
+ { FLM_STATUS_CALIB_SUCCESS, 3, 0, 0 },
+ { FLM_STATUS_CRCERR, 1, 10, 0x0000 },
+ { FLM_STATUS_CRITICAL, 1, 8, 0x0000 },
+ { FLM_STATUS_EFT_BP, 1, 11, 0x0000 },
+ { FLM_STATUS_IDLE, 1, 7, 0x0000 },
+ { FLM_STATUS_INITDONE, 1, 6, 0x0000 },
+ { FLM_STATUS_PANIC, 1, 9, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_done_fields[] = {
+ { FLM_STAT_AUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_fail_fields[] = {
+ { FLM_STAT_AUL_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_ignore_fields[] = {
+ { FLM_STAT_AUL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_hit_fields[] = {
+ { FLM_STAT_CSH_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_miss_fields[] = {
+ { FLM_STAT_CSH_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_unh_fields[] = {
+ { FLM_STAT_CSH_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_move_fields[] = {
+ { FLM_STAT_CUC_MOVE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_start_fields[] = {
+ { FLM_STAT_CUC_START_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_flows_fields[] = {
+ { FLM_STAT_FLOWS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_done_fields[] = {
+ { FLM_STAT_INF_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_skip_fields[] = {
+ { FLM_STAT_INF_SKIP_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_done_fields[] = {
+ { FLM_STAT_LRN_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_fail_fields[] = {
+ { FLM_STAT_LRN_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_ignore_fields[] = {
+ { FLM_STAT_LRN_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_dis_fields[] = {
+ { FLM_STAT_PCK_DIS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_hit_fields[] = {
+ { FLM_STAT_PCK_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_miss_fields[] = {
+ { FLM_STAT_PCK_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_unh_fields[] = {
+ { FLM_STAT_PCK_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_done_fields[] = {
+ { FLM_STAT_PRB_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_ignore_fields[] = {
+ { FLM_STAT_PRB_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_done_fields[] = {
+ { FLM_STAT_REL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_ignore_fields[] = {
+ { FLM_STAT_REL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_sta_done_fields[] = {
+ { FLM_STAT_STA_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_tul_done_fields[] = {
+ { FLM_STAT_TUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_done_fields[] = {
+ { FLM_STAT_UNL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_ignore_fields[] = {
+ { FLM_STAT_UNL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_sta_data_fields[] = {
+ { FLM_STA_DATA_EOR, 1, 95, 0x0000 }, { FLM_STA_DATA_ID, 32, 0, 0x0000 },
+ { FLM_STA_DATA_LDS, 1, 32, 0x0000 }, { FLM_STA_DATA_LFS, 1, 33, 0x0000 },
+ { FLM_STA_DATA_LIS, 1, 34, 0x0000 }, { FLM_STA_DATA_PDS, 1, 39, 0x0000 },
+ { FLM_STA_DATA_PIS, 1, 40, 0x0000 }, { FLM_STA_DATA_RDS, 1, 37, 0x0000 },
+ { FLM_STA_DATA_RIS, 1, 38, 0x0000 }, { FLM_STA_DATA_UDS, 1, 35, 0x0000 },
+ { FLM_STA_DATA_UIS, 1, 36, 0x0000 },
+};
+
+static nthw_fpga_register_init_s flm_registers[] = {
+ { FLM_BUF_CTRL, 14, 48, NTHW_FPGA_REG_TYPE_RW, 0, 3, flm_buf_ctrl_fields },
+ { FLM_CONTROL, 0, 31, NTHW_FPGA_REG_TYPE_MIXED, 134217728, 18, flm_control_fields },
+ { FLM_INF_DATA, 16, 288, NTHW_FPGA_REG_TYPE_RO, 0, 6, flm_inf_data_fields },
+ { FLM_LOAD_APS, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_aps_fields },
+ { FLM_LOAD_BIN, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_load_bin_fields },
+ { FLM_LOAD_LPS, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_lps_fields },
+ { FLM_LRN_DATA, 15, 768, NTHW_FPGA_REG_TYPE_WO, 0, 34, flm_lrn_data_fields },
+ { FLM_PRIO, 6, 32, NTHW_FPGA_REG_TYPE_WO, 269488144, 8, flm_prio_fields },
+ { FLM_PST_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_pst_ctrl_fields },
+ { FLM_PST_DATA, 13, 15, NTHW_FPGA_REG_TYPE_WO, 0, 3, flm_pst_data_fields },
+ { FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
+ { FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
+ { FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
+ { FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
+ { FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
+ { FLM_STAT_AUL_IGNORE, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_ignore_fields },
+ { FLM_STAT_CSH_HIT, 52, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_hit_fields },
+ { FLM_STAT_CSH_MISS, 53, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_miss_fields },
+ { FLM_STAT_CSH_UNH, 54, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_unh_fields },
+ { FLM_STAT_CUC_MOVE, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_move_fields },
+ { FLM_STAT_CUC_START, 55, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_start_fields },
+ { FLM_STAT_FLOWS, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_flows_fields },
+ { FLM_STAT_INF_DONE, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_done_fields },
+ { FLM_STAT_INF_SKIP, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_skip_fields },
+ { FLM_STAT_LRN_DONE, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_done_fields },
+ { FLM_STAT_LRN_FAIL, 34, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_fail_fields },
+ { FLM_STAT_LRN_IGNORE, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_ignore_fields },
+ { FLM_STAT_PCK_DIS, 51, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_dis_fields },
+ { FLM_STAT_PCK_HIT, 48, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_hit_fields },
+ { FLM_STAT_PCK_MISS, 49, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_miss_fields },
+ { FLM_STAT_PCK_UNH, 50, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_unh_fields },
+ { FLM_STAT_PRB_DONE, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_done_fields },
+ { FLM_STAT_PRB_IGNORE, 40, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_ignore_fields },
+ { FLM_STAT_REL_DONE, 37, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_done_fields },
+ { FLM_STAT_REL_IGNORE, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_ignore_fields },
+ { FLM_STAT_STA_DONE, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_sta_done_fields },
+ { FLM_STAT_TUL_DONE, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_tul_done_fields },
+ { FLM_STAT_UNL_DONE, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_done_fields },
+ { FLM_STAT_UNL_IGNORE, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_ignore_fields },
+ { FLM_STA_DATA, 17, 96, NTHW_FPGA_REG_TYPE_RO, 0, 11, flm_sta_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1743,6 +2025,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
+ { MOD_FLM, 0, MOD_FLM, 0, 25, NTHW_FPGA_BUS_TYPE_RAB1, 1280, 43, flm_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
@@ -1817,7 +2100,6 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
- { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
@@ -1937,5 +2219,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
};
--
2.45.0
* [PATCH v3 43/73] net/ntnic: add HFU module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (41 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 42/73] net/ntnic: add MOD FLM Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 44/73] net/ntnic: add IFR module Serhii Iliushyk
` (29 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Header Field Update module updates protocol fields,
for example length fields and next-protocol fields,
when packets have been modified.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 739cabfb1c..82068746b3 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -919,6 +919,41 @@ static nthw_fpga_register_init_s gpio_phy_registers[] = {
{ GPIO_PHY_GPIO, 1, 10, NTHW_FPGA_REG_TYPE_RW, 17, 10, gpio_phy_gpio_fields },
};
+static nthw_fpga_field_init_s hfu_rcp_ctrl_fields[] = {
+ { HFU_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { HFU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s hfu_rcp_data_fields[] = {
+ { HFU_RCP_DATA_LEN_A_ADD_DYN, 5, 15, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_ADD_OFS, 8, 20, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_OL4LEN, 1, 1, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_DYN, 5, 2, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_OFS, 8, 7, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_SUB_DYN, 5, 28, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_WR, 1, 0, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_DYN, 5, 47, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_OFS, 8, 52, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_DYN, 5, 34, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_OFS, 8, 39, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_SUB_DYN, 5, 60, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_WR, 1, 33, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_DYN, 5, 79, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_OFS, 8, 84, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_DYN, 5, 66, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_OFS, 8, 71, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_SUB_DYN, 5, 92, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_WR, 1, 65, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_DYN, 5, 98, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_OFS, 8, 103, 0x0000 },
+ { HFU_RCP_DATA_TTL_WR, 1, 97, 0x0000 },
+};
+
+static nthw_fpga_register_init_s hfu_registers[] = {
+ { HFU_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, hfu_rcp_ctrl_fields },
+ { HFU_RCP_DATA, 1, 111, NTHW_FPGA_REG_TYPE_WO, 0, 22, hfu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s hif_build_time_fields[] = {
{ HIF_BUILD_TIME_TIME, 32, 0, 1726740521 },
};
@@ -2033,6 +2068,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
gpio_phy_registers
},
+ { MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
@@ -2219,5 +2255,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
};
--
2.45.0
* [PATCH v3 44/73] net/ntnic: add IFR module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (42 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 43/73] net/ntnic: add HFU module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 45/73] net/ntnic: add MAC Rx module Serhii Iliushyk
` (28 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The IP Fragmenter module can fragment outgoing packets
based on a programmable MTU.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 82068746b3..509e1f6860 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1095,6 +1095,44 @@ static nthw_fpga_register_init_s hsh_registers[] = {
{ HSH_RCP_DATA, 1, 743, NTHW_FPGA_REG_TYPE_WO, 0, 23, hsh_rcp_data_fields },
};
+static nthw_fpga_field_init_s ifr_counters_ctrl_fields[] = {
+ { IFR_COUNTERS_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_COUNTERS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_counters_data_fields[] = {
+ { IFR_COUNTERS_DATA_DROP, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_ctrl_fields[] = {
+ { IFR_DF_BUF_CTRL_AVAILABLE, 11, 0, 0x0000 },
+ { IFR_DF_BUF_CTRL_MTU_PROFILE, 16, 11, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_data_fields[] = {
+ { IFR_DF_BUF_DATA_FIFO_DAT, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_ctrl_fields[] = {
+ { IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_data_fields[] = {
+ { IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 }, { IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 }, { IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ifr_registers[] = {
+ { IFR_COUNTERS_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_counters_ctrl_fields },
+ { IFR_COUNTERS_DATA, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_counters_data_fields },
+ { IFR_DF_BUF_CTRL, 2, 27, NTHW_FPGA_REG_TYPE_RO, 0, 2, ifr_df_buf_ctrl_fields },
+ { IFR_DF_BUF_DATA, 3, 128, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_df_buf_data_fields },
+ { IFR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_rcp_ctrl_fields },
+ { IFR_RCP_DATA, 1, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, ifr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s iic_adr_fields[] = {
{ IIC_ADR_SLV_ADR, 7, 1, 0 },
};
@@ -2071,6 +2109,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
+ { MOD_IFR, 0, MOD_IFR, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 9984, 6, ifr_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
{ MOD_IIC, 1, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 896, 22, iic_registers },
{ MOD_IIC, 2, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 24832, 22, iic_registers },
@@ -2255,5 +2294,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
};
--
2.45.0
* [PATCH v3 45/73] net/ntnic: add MAC Rx module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (43 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 44/73] net/ntnic: add IFR module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 46/73] net/ntnic: add MAC Tx module Serhii Iliushyk
` (27 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Receive module contains counters
that keep track of received packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 61 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +++++++++
4 files changed, 92 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 509e1f6860..eecd6342c0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1774,6 +1774,63 @@ static nthw_fpga_register_init_s mac_pcs_registers[] = {
},
};
+static nthw_fpga_field_init_s mac_rx_bad_fcs_fields[] = {
+ { MAC_RX_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_fragment_fields[] = {
+ { MAC_RX_FRAGMENT_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_bad_fcs_fields[] = {
+ { MAC_RX_PACKET_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_small_fields[] = {
+ { MAC_RX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_bytes_fields[] = {
+ { MAC_RX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_bytes_fields[] = {
+ { MAC_RX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_packets_fields[] = {
+ { MAC_RX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_packets_fields[] = {
+ { MAC_RX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_undersize_fields[] = {
+ { MAC_RX_UNDERSIZE_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_rx_registers[] = {
+ { MAC_RX_BAD_FCS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_bad_fcs_fields },
+ { MAC_RX_FRAGMENT, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_fragment_fields },
+ {
+ MAC_RX_PACKET_BAD_FCS, 7, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_packet_bad_fcs_fields
+ },
+ { MAC_RX_PACKET_SMALL, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_packet_small_fields },
+ { MAC_RX_TOTAL_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_bytes_fields },
+ {
+ MAC_RX_TOTAL_GOOD_BYTES, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_bytes_fields
+ },
+ {
+ MAC_RX_TOTAL_GOOD_PACKETS, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_packets_fields
+ },
+ { MAC_RX_TOTAL_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_packets_fields },
+ { MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2123,6 +2180,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_MAC_PCS, 1, MOD_MAC_PCS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB2, 11776, 44,
mac_pcs_registers
},
+ { MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
+ { MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2294,5 +2353,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index b6be02f45e..5983ba7095 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -29,6 +29,7 @@
#define MOD_IIC (0x7629cddbUL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
+#define MOD_MAC_RX (0x6347b490UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -43,7 +44,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (14)
+#define MOD_IDX_COUNT (31)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 3560eeda7d..5ebbec6c7e 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -30,6 +30,7 @@
#include "nthw_fpga_reg_defs_ins.h"
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
+#include "nthw_fpga_reg_defs_mac_rx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
new file mode 100644
index 0000000000..3829c10f3b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
@@ -0,0 +1,29 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_RX_
+#define _NTHW_FPGA_REG_DEFS_MAC_RX_
+
+/* MAC_RX */
+#define MAC_RX_BAD_FCS (0xca07f618UL)
+#define MAC_RX_BAD_FCS_COUNT (0x11d5ba0eUL)
+#define MAC_RX_FRAGMENT (0x5363b736UL)
+#define MAC_RX_FRAGMENT_COUNT (0xf664c9aUL)
+#define MAC_RX_PACKET_BAD_FCS (0x4cb8b34cUL)
+#define MAC_RX_PACKET_BAD_FCS_COUNT (0xb6701e28UL)
+#define MAC_RX_PACKET_SMALL (0xed318a65UL)
+#define MAC_RX_PACKET_SMALL_COUNT (0x72095ec7UL)
+#define MAC_RX_TOTAL_BYTES (0x831313e2UL)
+#define MAC_RX_TOTAL_BYTES_COUNT (0xe5d8be59UL)
+#define MAC_RX_TOTAL_GOOD_BYTES (0x912c2d1cUL)
+#define MAC_RX_TOTAL_GOOD_BYTES_COUNT (0x63bb5f3eUL)
+#define MAC_RX_TOTAL_GOOD_PACKETS (0xfbb4f497UL)
+#define MAC_RX_TOTAL_GOOD_PACKETS_COUNT (0xae9d21b0UL)
+#define MAC_RX_TOTAL_PACKETS (0xb0ea3730UL)
+#define MAC_RX_TOTAL_PACKETS_COUNT (0x532c885dUL)
+#define MAC_RX_UNDERSIZE (0xb6fa4bdbUL)
+#define MAC_RX_UNDERSIZE_COUNT (0x471945ffUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_RX_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
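The MAC_RX_* registers added above are 32-bit read-only counters, so software polling them has to widen the raw values into 64-bit statistics before the hardware count wraps. A minimal sketch of that accumulation (illustrative only; `struct counter64` and `counter64_update` are not driver APIs):

```c
#include <stdint.h>

/* Hypothetical accumulator for a 32-bit read-only hardware counter. */
struct counter64 {
	uint32_t last_hw;	/* last raw 32-bit register value read */
	uint64_t total;		/* accumulated 64-bit statistic */
};

/* Fold a fresh 32-bit hardware reading into the 64-bit total.
 * Unsigned 32-bit subtraction handles a single wrap-around correctly. */
static inline void counter64_update(struct counter64 *c, uint32_t hw_now)
{
	c->total += (uint32_t)(hw_now - c->last_hw);
	c->last_hw = hw_now;
}
```

The only requirement is that the counter is polled more often than it can wrap, which for a 32-bit byte counter at line rate means polling on the order of seconds.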
* [PATCH v3 46/73] net/ntnic: add MAC Tx module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (44 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 45/73] net/ntnic: add MAC Rx module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 47/73] net/ntnic: add RPP LR module Serhii Iliushyk
` (26 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Transmit module contains counters
that keep track of transmitted packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 ++++++++++
4 files changed, 61 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index eecd6342c0..7a2f5aec32 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1831,6 +1831,40 @@ static nthw_fpga_register_init_s mac_rx_registers[] = {
{ MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
};
+static nthw_fpga_field_init_s mac_tx_packet_small_fields[] = {
+ { MAC_TX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_bytes_fields[] = {
+ { MAC_TX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_bytes_fields[] = {
+ { MAC_TX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_packets_fields[] = {
+ { MAC_TX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_packets_fields[] = {
+ { MAC_TX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_tx_registers[] = {
+ { MAC_TX_PACKET_SMALL, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_packet_small_fields },
+ { MAC_TX_TOTAL_BYTES, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_bytes_fields },
+ {
+ MAC_TX_TOTAL_GOOD_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_bytes_fields
+ },
+ {
+ MAC_TX_TOTAL_GOOD_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_packets_fields
+ },
+ { MAC_TX_TOTAL_PACKETS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_packets_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2182,6 +2216,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
},
{ MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
{ MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
+ { MOD_MAC_TX, 0, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 11264, 5, mac_tx_registers },
+ { MOD_MAC_TX, 1, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12800, 5, mac_tx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2353,5 +2389,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 5983ba7095..f4a913f3d2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -30,6 +30,7 @@
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
+#define MOD_MAC_TX (0x351d1316UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -44,7 +45,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (31)
+#define MOD_IDX_COUNT (32)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 5ebbec6c7e..7741aa563f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -31,6 +31,7 @@
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
#include "nthw_fpga_reg_defs_mac_rx.h"
+#include "nthw_fpga_reg_defs_mac_tx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
new file mode 100644
index 0000000000..6a77d449ae
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_TX_
+#define _NTHW_FPGA_REG_DEFS_MAC_TX_
+
+/* MAC_TX */
+#define MAC_TX_PACKET_SMALL (0xcfcb5e97UL)
+#define MAC_TX_PACKET_SMALL_COUNT (0x84345b01UL)
+#define MAC_TX_TOTAL_BYTES (0x7bd15854UL)
+#define MAC_TX_TOTAL_BYTES_COUNT (0x61fb238cUL)
+#define MAC_TX_TOTAL_GOOD_BYTES (0xcf0260fUL)
+#define MAC_TX_TOTAL_GOOD_BYTES_COUNT (0x8603398UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS (0xd89f151UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS_COUNT (0x12c47c77UL)
+#define MAC_TX_TOTAL_PACKETS (0xe37b5ed4UL)
+#define MAC_TX_TOTAL_PACKETS_COUNT (0x21ddd2ddUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_TX_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 47/73] net/ntnic: add RPP LR module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (45 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 46/73] net/ntnic: add MAC Tx module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 48/73] net/ntnic: add MOD SLC LR Serhii Iliushyk
` (25 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The RX Packet Process for Local Retransmit module can add bytes
in the FPGA TX pipeline, which is needed when a packet increases in size.
Note that this only makes room for packet expansion;
the actual expansion is done by other modules.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 32 ++++++++++++++++++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 7a2f5aec32..33437da204 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2138,6 +2138,35 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
+ { RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_data_fields[] = {
+ { RPP_LR_IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_ctrl_fields[] = {
+ { RPP_LR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_data_fields[] = {
+ { RPP_LR_RCP_DATA_EXP, 14, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpp_lr_registers[] = {
+ { RPP_LR_IFR_RCP_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_ifr_rcp_ctrl_fields },
+ { RPP_LR_IFR_RCP_DATA, 3, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, rpp_lr_ifr_rcp_data_fields },
+ { RPP_LR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_rcp_ctrl_fields },
+ { RPP_LR_RCP_DATA, 1, 14, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpp_lr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s rst9563_ctrl_fields[] = {
{ RST9563_CTRL_PTP_MMCM_CLKSEL, 1, 2, 1 },
{ RST9563_CTRL_TS_CLKSEL, 1, 1, 1 },
@@ -2230,6 +2259,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_QSL, 0, MOD_QSL, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 1792, 8, qsl_registers },
{ MOD_RAC, 0, MOD_RAC, 3, 0, NTHW_FPGA_BUS_TYPE_PCI, 8192, 14, rac_registers },
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
+ { MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
};
@@ -2389,5 +2419,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
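The rpp_lr_ifr_rcp_data_fields table above fixes the bit layout of an 18-bit recipe word (IPV4_EN at bit 0, IPV6_EN at bit 1, a 14-bit MTU at bit 2, IPV6_DROP at bit 16, IPV4_DF_DROP at bit 17). A sketch of packing such a word from those positions (the helper is illustrative, not a driver API; only the bit layout is taken from the table):

```c
#include <stdint.h>

/* Pack one RPP_LR_IFR_RCP_DATA word; bit positions and widths follow
 * the rpp_lr_ifr_rcp_data_fields entries in the patch above. */
static inline uint32_t rpp_lr_ifr_rcp_pack(int ipv4_en, int ipv6_en,
					   uint16_t mtu,
					   int ipv6_drop, int ipv4_df_drop)
{
	uint32_t v = 0;

	v |= (uint32_t)(ipv4_en & 0x1) << 0;		/* IPV4_EN */
	v |= (uint32_t)(ipv6_en & 0x1) << 1;		/* IPV6_EN */
	v |= (uint32_t)(mtu & 0x3FFF) << 2;		/* MTU, 14 bits */
	v |= (uint32_t)(ipv6_drop & 0x1) << 16;		/* IPV6_DROP */
	v |= (uint32_t)(ipv4_df_drop & 0x1) << 17;	/* IPV4_DF_DROP */
	return v;
}
```

Note that the five field widths sum to exactly the 18-bit width declared for the RPP_LR_IFR_RCP_DATA register, so the word fits with no spare bits.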
* [PATCH v3 48/73] net/ntnic: add MOD SLC LR
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (46 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 47/73] net/ntnic: add RPP LR module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 49/73] net/ntnic: add Tx CPY module Serhii Iliushyk
` (24 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new header to the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 33437da204..0f69f89527 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2214,6 +2214,23 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
+static nthw_fpga_field_init_s slc_rcp_ctrl_fields[] = {
+ { SLC_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { SLC_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s slc_rcp_data_fields[] = {
+ { SLC_RCP_DATA_HEAD_DYN, 5, 1, 0x0000 }, { SLC_RCP_DATA_HEAD_OFS, 8, 6, 0x0000 },
+ { SLC_RCP_DATA_HEAD_SLC_EN, 1, 0, 0x0000 }, { SLC_RCP_DATA_PCAP, 1, 35, 0x0000 },
+ { SLC_RCP_DATA_TAIL_DYN, 5, 15, 0x0000 }, { SLC_RCP_DATA_TAIL_OFS, 15, 20, 0x0000 },
+ { SLC_RCP_DATA_TAIL_SLC_EN, 1, 14, 0x0000 },
+};
+
+static nthw_fpga_register_init_s slc_registers[] = {
+ { SLC_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, slc_rcp_ctrl_fields },
+ { SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2261,6 +2278,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
+ { MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2419,5 +2437,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index f4a913f3d2..865dd6a084 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,11 +41,12 @@
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
+#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (32)
+#define MOD_IDX_COUNT (33)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
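In software terms, the head slice configured through SLC_RCP_DATA_HEAD_OFS amounts to dropping a fixed number of leading bytes so the TX pipeline can prepend a replacement header. A byte-buffer model of that effect (purely illustrative; the hardware does this per recipe, not via memmove):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Model of the head slicer: drop the first head_ofs bytes of a packet
 * buffer and return the new length. */
static size_t slc_strip_head(uint8_t *pkt, size_t len, size_t head_ofs)
{
	if (head_ofs >= len)
		return 0;	/* whole packet sliced away */
	memmove(pkt, pkt + head_ofs, len - head_ofs);
	return len - head_ofs;
}
```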
* [PATCH v3 49/73] net/ntnic: add Tx CPY module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (47 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 48/73] net/ntnic: add MOD SLC LR Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 50/73] net/ntnic: add Tx INS module Serhii Iliushyk
` (23 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Copy module writes data to packet fields based on the lookup
performed by the FLM module.
This is used for NAT and can support other actions based
on the rte_flow action MODIFY_FIELD.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 204 +++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 205 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 0f69f89527..60fd748ea2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,207 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s cpy_packet_reader0_ctrl_fields[] = {
+ { CPY_PACKET_READER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_PACKET_READER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_packet_reader0_data_fields[] = {
+ { CPY_PACKET_READER0_DATA_DYN, 5, 10, 0x0000 },
+ { CPY_PACKET_READER0_DATA_OFS, 10, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_ctrl_fields[] = {
+ { CPY_WRITER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_data_fields[] = {
+ { CPY_WRITER0_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER0_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER0_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER0_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER0_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_ctrl_fields[] = {
+ { CPY_WRITER0_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_data_fields[] = {
+ { CPY_WRITER0_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_ctrl_fields[] = {
+ { CPY_WRITER1_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_data_fields[] = {
+ { CPY_WRITER1_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER1_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER1_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER1_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER1_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_ctrl_fields[] = {
+ { CPY_WRITER1_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_data_fields[] = {
+ { CPY_WRITER1_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_ctrl_fields[] = {
+ { CPY_WRITER2_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_data_fields[] = {
+ { CPY_WRITER2_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER2_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER2_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER2_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER2_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_ctrl_fields[] = {
+ { CPY_WRITER2_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_data_fields[] = {
+ { CPY_WRITER2_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_ctrl_fields[] = {
+ { CPY_WRITER3_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_data_fields[] = {
+ { CPY_WRITER3_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER3_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER3_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER3_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER3_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_ctrl_fields[] = {
+ { CPY_WRITER3_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_data_fields[] = {
+ { CPY_WRITER3_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_ctrl_fields[] = {
+ { CPY_WRITER4_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_data_fields[] = {
+ { CPY_WRITER4_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER4_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER4_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER4_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER4_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_ctrl_fields[] = {
+ { CPY_WRITER4_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_data_fields[] = {
+ { CPY_WRITER4_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_ctrl_fields[] = {
+ { CPY_WRITER5_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_data_fields[] = {
+ { CPY_WRITER5_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER5_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER5_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER5_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER5_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_ctrl_fields[] = {
+ { CPY_WRITER5_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_data_fields[] = {
+ { CPY_WRITER5_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s cpy_registers[] = {
+ {
+ CPY_PACKET_READER0_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_ctrl_fields
+ },
+ {
+ CPY_PACKET_READER0_DATA, 25, 15, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_data_fields
+ },
+ { CPY_WRITER0_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer0_ctrl_fields },
+ { CPY_WRITER0_DATA, 1, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer0_data_fields },
+ {
+ CPY_WRITER0_MASK_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer0_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER0_MASK_DATA, 3, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer0_mask_data_fields
+ },
+ { CPY_WRITER1_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer1_ctrl_fields },
+ { CPY_WRITER1_DATA, 5, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer1_data_fields },
+ {
+ CPY_WRITER1_MASK_CTRL, 6, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer1_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER1_MASK_DATA, 7, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer1_mask_data_fields
+ },
+ { CPY_WRITER2_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer2_ctrl_fields },
+ { CPY_WRITER2_DATA, 9, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer2_data_fields },
+ {
+ CPY_WRITER2_MASK_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer2_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER2_MASK_DATA, 11, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer2_mask_data_fields
+ },
+ { CPY_WRITER3_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer3_ctrl_fields },
+ { CPY_WRITER3_DATA, 13, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer3_data_fields },
+ {
+ CPY_WRITER3_MASK_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer3_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER3_MASK_DATA, 15, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer3_mask_data_fields
+ },
+ { CPY_WRITER4_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer4_ctrl_fields },
+ { CPY_WRITER4_DATA, 17, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer4_data_fields },
+ {
+ CPY_WRITER4_MASK_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer4_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER4_MASK_DATA, 19, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer4_mask_data_fields
+ },
+ { CPY_WRITER5_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer5_ctrl_fields },
+ { CPY_WRITER5_DATA, 21, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer5_data_fields },
+ {
+ CPY_WRITER5_MASK_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer5_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER5_MASK_DATA, 23, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer5_mask_data_fields
+ },
+};
+
static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
{ CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2279,6 +2480,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
+ { MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2437,5 +2639,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 865dd6a084..0ab5ae0310 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -15,6 +15,7 @@
#define MOD_UNKNOWN (0L)/* Unknown/uninitialized - keep this as the first element */
#define MOD_CAT (0x30b447c2UL)
+#define MOD_CPY (0x1ddc186fUL)
#define MOD_CSU (0x3f470787UL)
#define MOD_DBS (0x80b29727UL)
#define MOD_FLM (0xe7ba53a4UL)
@@ -46,7 +47,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (33)
+#define MOD_IDX_COUNT (34)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
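Each CPY writer above carries a 16-bit CPY_WRITERn_MASK_DATA_BYTE_MASK field, suggesting per-byte write enables on the field being modified. A sketch of that semantics (the bit-to-byte ordering and the helper itself are assumptions for illustration, not the hardware's documented behaviour):

```c
#include <stdint.h>

/* Copy only the bytes of src whose corresponding bit is set in the
 * 16-bit byte mask, leaving the other dst bytes untouched
 * (assumed semantics of CPY_WRITERn_MASK_DATA_BYTE_MASK). */
static void cpy_apply_mask(uint8_t *dst, const uint8_t *src,
			   uint16_t byte_mask, int len)
{
	for (int i = 0; i < len && i < 16; i++)
		if (byte_mask & (1u << i))
			dst[i] = src[i];
}
```

This kind of masked write is what makes the module usable for NAT-style rewrites, where only selected bytes of an address or port field are replaced.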
* [PATCH v3 50/73] net/ntnic: add Tx INS module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (48 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 49/73] net/ntnic: add Tx CPY module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 16:59 ` [PATCH v3 51/73] net/ntnic: add Tx RPL module Serhii Iliushyk
` (22 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Inserter module injects zeros at a given offset within a packet,
effectively expanding the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 19 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 60fd748ea2..c8841b1dc2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1457,6 +1457,22 @@ static nthw_fpga_register_init_s iic_registers[] = {
{ IIC_TX_FIFO_OCY, 69, 4, NTHW_FPGA_REG_TYPE_RO, 0, 1, iic_tx_fifo_ocy_fields },
};
+static nthw_fpga_field_init_s ins_rcp_ctrl_fields[] = {
+ { INS_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { INS_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ins_rcp_data_fields[] = {
+ { INS_RCP_DATA_DYN, 5, 0, 0x0000 },
+ { INS_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { INS_RCP_DATA_OFS, 10, 5, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ins_registers[] = {
+ { INS_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ins_rcp_ctrl_fields },
+ { INS_RCP_DATA, 1, 23, NTHW_FPGA_REG_TYPE_WO, 0, 3, ins_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s km_cam_ctrl_fields[] = {
{ KM_CAM_CTRL_ADR, 13, 0, 0x0000 },
{ KM_CAM_CTRL_CNT, 16, 16, 0x0000 },
@@ -2481,6 +2497,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
+ { MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2639,5 +2656,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 0ab5ae0310..8c0c727e16 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -28,6 +28,7 @@
#define MOD_I2CM (0x93bc7780UL)
#define MOD_IFR (0x9b01f1e6UL)
#define MOD_IIC (0x7629cddbUL)
+#define MOD_INS (0x24df4b78UL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
@@ -47,7 +48,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (34)
+#define MOD_IDX_COUNT (35)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v3 51/73] net/ntnic: add Tx RPL module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (49 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 50/73] net/ntnic: add Tx INS module Serhii Iliushyk
@ 2024-10-23 16:59 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
` (21 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 16:59 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Replacer module can replace a range of bytes in a packet.
The replacement data is stored in a table in the module
and will often contain tunnel data.
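The table-driven replacement described above can be modelled in a few lines of C. This is an illustrative host-side sketch, not the driver's code; `rpl_replace` and its parameters are hypothetical names.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal software model of the TX Replacer: overwrite `len` bytes of the
 * packet at offset `ofs` with bytes taken from one entry of a replacement
 * table (e.g. pre-built tunnel headers). Returns 0 on success, -1 if the
 * requested range does not fit the packet or the table entry. */
int rpl_replace(uint8_t *pkt, size_t pkt_len,
                const uint8_t *table, size_t entry_size,
                size_t entry_idx, size_t ofs, size_t len)
{
	if (len > entry_size || ofs + len > pkt_len)
		return -1;            /* replacement must fit both packet and entry */
	memcpy(pkt + ofs, table + entry_idx * entry_size, len);
	return 0;
}
```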
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index c8841b1dc2..a3d9f94fc6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2355,6 +2355,44 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpl_ext_ctrl_fields[] = {
+ { RPL_EXT_CTRL_ADR, 10, 0, 0x0000 },
+ { RPL_EXT_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_ext_data_fields[] = {
+ { RPL_EXT_DATA_RPL_PTR, 12, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_ctrl_fields[] = {
+ { RPL_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPL_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_data_fields[] = {
+ { RPL_RCP_DATA_DYN, 5, 0, 0x0000 }, { RPL_RCP_DATA_ETH_TYPE_WR, 1, 36, 0x0000 },
+ { RPL_RCP_DATA_EXT_PRIO, 1, 35, 0x0000 }, { RPL_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { RPL_RCP_DATA_OFS, 10, 5, 0x0000 }, { RPL_RCP_DATA_RPL_PTR, 12, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_ctrl_fields[] = {
+ { RPL_RPL_CTRL_ADR, 12, 0, 0x0000 },
+ { RPL_RPL_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_data_fields[] = {
+ { RPL_RPL_DATA_VALUE, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpl_registers[] = {
+ { RPL_EXT_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_ext_ctrl_fields },
+ { RPL_EXT_DATA, 3, 12, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_ext_data_fields },
+ { RPL_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rcp_ctrl_fields },
+ { RPL_RCP_DATA, 1, 37, NTHW_FPGA_REG_TYPE_WO, 0, 6, rpl_rcp_data_fields },
+ { RPL_RPL_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rpl_ctrl_fields },
+ { RPL_RPL_DATA, 5, 128, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_rpl_data_fields },
+};
+
static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
{ RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2498,6 +2536,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
+ { MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2656,5 +2695,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 8c0c727e16..2b059d98ff 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -40,6 +40,7 @@
#define MOD_QSL (0x448ed859UL)
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
+#define MOD_RPL (0x6de535c3UL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
@@ -48,7 +49,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (35)
+#define MOD_IDX_COUNT (36)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v3 52/73] net/ntnic: update alignment for virt queue structs
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (50 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 51/73] net/ntnic: add Tx RPL module Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
` (20 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Update incorrect alignment attributes on the virt queue structs.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Fix __rte_packed usage
The original NT PMD driver uses pragma pack(1), which is equivalent to
combining the packed and aligned(1) attributes.
Since the packed attribute already implies byte alignment,
aligned(1) can be omitted.
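The effect of the attribute change can be seen directly from the resulting struct layouts. A self-contained illustration (plain GCC attributes are used here instead of the rte_common.h `__rte_packed`/`__rte_aligned` macros, and the struct names are made up for the example):

```c
#include <assert.h>
#include <stdint.h>

/* Default layout: compiler inserts 2 bytes of padding after `flags`
 * so that the 32-bit `id` is naturally aligned. */
struct natural {
	uint16_t flags;
	uint32_t id;
};

/* packed: no padding, alignment 1 -- matches pragma pack(1). */
struct packed_layout {
	uint16_t flags;
	uint32_t id;
} __attribute__((packed));

/* aligned(8) alone does NOT remove internal padding; it only raises
 * the alignment of the struct as a whole. */
struct aligned8 {
	uint16_t flags;
	uint32_t id;
} __attribute__((aligned(8)));
```

This is why `__rte_packed` alone reproduces the original pragma pack(1) layout, while `__rte_aligned(8)` did not.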
---
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
index bde0fed273..e46a3bef28 100644
--- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
+++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <rte_common.h>
#include <unistd.h>
#include "ntos_drv.h"
@@ -67,20 +68,20 @@
} \
} while (0)
-struct __rte_aligned(8) virtq_avail {
+struct __rte_packed virtq_avail {
uint16_t flags;
uint16_t idx;
uint16_t ring[]; /* Queue Size */
};
-struct __rte_aligned(8) virtq_used_elem {
+struct __rte_packed virtq_used_elem {
/* Index of start of used descriptor chain. */
uint32_t id;
/* Total length of the descriptor chain which was used (written to) */
uint32_t len;
};
-struct __rte_aligned(8) virtq_used {
+struct __rte_packed virtq_used {
uint16_t flags;
uint16_t idx;
struct virtq_used_elem ring[]; /* Queue Size */
--
2.45.0
* [PATCH v3 53/73] net/ntnic: enable RSS feature
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (51 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-28 16:15 ` Stephen Hemminger
2024-10-23 17:00 ` [PATCH v3 54/73] net/ntnic: add statistics API Serhii Iliushyk
` (19 subsequent siblings)
72 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit
Enable the receive side scaling (RSS) feature.
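The level handling added to create_action_elements_inline() in this patch folds the RSS `level` field into the `types` bitmask. A stand-alone sketch of that translation follows; the RTE_ETH_RSS_LEVEL_* constants are redefined locally (mirroring their DPDK values, bits 50-51 of the types mask) so the example compiles without DPDK headers, and `rss_effective_types` is an illustrative name.

```c
#include <assert.h>
#include <stdint.h>

/* Local mirrors of the DPDK RSS level encoding in the upper bits of
 * the 64-bit `types` mask (assumed values, see rte_ethdev.h). */
#define RSS_LEVEL_SHIFT      50
#define RSS_LEVEL_MASK       (3ULL << RSS_LEVEL_SHIFT)
#define RSS_LEVEL_OUTERMOST  (1ULL << RSS_LEVEL_SHIFT)
#define RSS_LEVEL_INNERMOST  (2ULL << RSS_LEVEL_SHIFT)

/* level 0: keep whatever level bits the caller put in `types`;
 * level 1/2: override them with outermost/innermost;
 * anything else is unsupported and returns 0 (error). */
uint64_t rss_effective_types(uint64_t types, uint32_t level)
{
	switch (level) {
	case 0:
		return types;
	case 1:
		return (types & ~RSS_LEVEL_MASK) | RSS_LEVEL_OUTERMOST;
	case 2:
		return (types & ~RSS_LEVEL_MASK) | RSS_LEVEL_INNERMOST;
	default:
		return 0;
	}
}
```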
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 3 +
drivers/net/ntnic/include/create_elements.h | 1 +
drivers/net/ntnic/include/flow_api.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 6 ++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 77 +++++++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 73 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 212 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4cb9509742..e5d5abd0ed 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -10,6 +10,8 @@ Link status = Y
Queue start/stop = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
Linux = Y
x86-64 = Y
@@ -37,3 +39,4 @@ port_id = Y
queue = Y
raw_decap = Y
raw_encap = Y
+rss = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 70e6cad195..eaa578e72a 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,7 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_rss flow_rss;
struct flow_action_raw_encap encap;
struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 2e96fa5bed..4a1525f237 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -114,6 +114,8 @@ struct flow_nic_dev {
struct flow_eth_dev *eth_base;
pthread_mutex_t mtx;
+ /* RSS hashing configuration */
+ struct nt_eth_rss_conf rss_conf;
/* next NIC linked list */
struct flow_nic_dev *next;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34f2cad2cd..d61044402d 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1061,6 +1061,12 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+
+ /*
+ * Other
+ */
+ .hw_mod_hsh_rcp_flush = hw_mod_hsh_rcp_flush,
+ .flow_nic_set_hasher_fields = flow_nic_set_hasher_fields,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1dfd96eaac..bbf450697c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -603,6 +603,49 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RSS", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_rss rss_tmp;
+ const struct rte_flow_action_rss *rss =
+ memcpy_mask_if(&rss_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_rss));
+
+ if (rss->key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: RSS hash key length %u exceeds maximum value %u",
+ rss->key_len, MAX_RSS_KEY_LEN);
+ flow_nic_set_error(ERR_RSS_TOO_LONG_KEY, error);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < rss->queue_num; ++i) {
+ int hw_id = rx_queue_idx_to_hw_id(dev, rss->queue[i]);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+ }
+
+ fd->hsh.func = rss->func;
+ fd->hsh.types = rss->types;
+ fd->hsh.key = rss->key;
+ fd->hsh.key_len = rss->key_len;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RSS func: %d, types: 0x%" PRIX64 ", key_len: %d",
+ dev, rss->func, rss->types, rss->key_len);
+
+ fd->full_offload = 0;
+ *num_queues += rss->queue_num;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MARK:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bfca8f28b1..1b25621537 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -214,6 +214,14 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
dev_info->max_rx_pktlen = HW_MAX_PKT_LEN;
dev_info->max_mtu = MAX_MTU;
+ if (p_adapter_info->fpga_info.profile == FPGA_INFO_PROFILE_INLINE) {
+ dev_info->flow_type_rss_offloads = NT_ETH_RSS_OFFLOAD_MASK;
+ dev_info->hash_key_size = MAX_RSS_KEY_LEN;
+
+ dev_info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(TOEPLITZ);
+ }
+
if (internals->p_drv) {
dev_info->max_rx_queues = internals->nb_rx_queues;
dev_info->max_tx_queues = internals->nb_tx_queues;
@@ -1372,6 +1380,73 @@ promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
return 0;
}
+static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+ struct nt_eth_rss_conf tmp_rss_conf = { 0 };
+ const int hsh_idx = 0; /* hsh index 0 means the default receipt in HSH module */
+ int res = 0;
+
+ if (rss_conf->rss_key != NULL) {
+ if (rss_conf->rss_key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, NTNIC,
+ "ERROR: - RSS hash key length %u exceeds maximum value %u",
+ rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
+ return -1;
+ }
+
+ rte_memcpy(&tmp_rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+ }
+
+ tmp_rss_conf.algorithm = rss_conf->algorithm;
+
+ tmp_rss_conf.rss_hf = rss_conf->rss_hf;
+ res = flow_filter_ops->flow_nic_set_hasher_fields(ndev, hsh_idx, tmp_rss_conf);
+
+ if (res == 0) {
+ flow_filter_ops->hw_mod_hsh_rcp_flush(&ndev->be, hsh_idx, 1);
+ rte_memcpy(&ndev->rss_conf, &tmp_rss_conf, sizeof(struct nt_eth_rss_conf));
+
+ } else {
+ NT_LOG(ERR, NTNIC, "ERROR: - RSS hash update failed with error %i", res);
+ }
+
+ return res;
+}
+
+static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+
+ rss_conf->algorithm = (enum rte_eth_hash_function)ndev->rss_conf.algorithm;
+
+ rss_conf->rss_hf = ndev->rss_conf.rss_hf;
+
+ /*
+ * copy full stored key into rss_key and pad it with
+ * zeros up to rss_key_len / MAX_RSS_KEY_LEN
+ */
+ if (rss_conf->rss_key != NULL) {
+ int key_len = rss_conf->rss_key_len < MAX_RSS_KEY_LEN ? rss_conf->rss_key_len
+ : MAX_RSS_KEY_LEN;
+ memset(rss_conf->rss_key, 0, rss_conf->rss_key_len);
+ rte_memcpy(rss_conf->rss_key, &ndev->rss_conf.rss_key, key_len);
+ rss_conf->rss_key_len = key_len;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
@@ -1395,6 +1470,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
+ .rss_hash_update = eth_dev_rss_hash_update,
+ .rss_hash_conf_get = rss_hash_conf_get,
};
/*
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 87b26bd315..4962ab8d5a 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -317,6 +317,79 @@ int create_action_elements_inline(struct cnv_action_s *action,
* Non-compatible actions handled here
*/
switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RSS: {
+ const struct rte_flow_action_rss *rss =
+ (const struct rte_flow_action_rss *)actions[aidx].conf;
+
+ switch (rss->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_DEFAULT;
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_TOEPLITZ;
+
+ if (rte_is_power_of_2(rss->queue_num) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - for Toeplitz the number of queues must be power of two");
+ return -1;
+ }
+
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT:
+ case RTE_ETH_HASH_FUNCTION_MAX:
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported function: %u",
+ rss->func);
+ return -1;
+ }
+
+ uint64_t tmp_rss_types = 0;
+
+ switch (rss->level) {
+ case 1:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_OUTERMOST;
+ break;
+
+ case 2:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_INNERMOST;
+ break;
+
+ case 0:
+ /* keep level mask specified at types */
+ action->flow_rss.types = rss->types;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported level: %u",
+ rss->level);
+ return -1;
+ }
+
+ action->flow_rss.level = 0;
+ action->flow_rss.key_len = rss->key_len;
+ action->flow_rss.queue_num = rss->queue_num;
+ action->flow_rss.key = rss->key;
+ action->flow_rss.queue = rss->queue;
+ action->flow_actions[aidx].conf = &action->flow_rss;
+ }
+ break;
+
case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
const struct rte_flow_action_raw_decap *decap =
(const struct rte_flow_action_raw_decap *)actions[aidx]
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 12baa13800..e40ed9b949 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -316,6 +316,13 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+
+ /*
+ * Other
+ */
+ int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+ int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v3 54/73] net/ntnic: add statistics API
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (52 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 55/73] net/ntnic: add rpf module Serhii Iliushyk
` (18 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Statistics init, setup, get, and reset APIs and their
implementations were added.
Statistics FPGA defines were added.
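The statistics DMA area sized in nt4ga_stat_setup() below is one 32-bit word per hardware counter plus space for the 64-bit timestamp, allocated with the 16 KiB alignment the FPGA requires. A small sketch of that arithmetic (helper names are illustrative, not driver API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One 32-bit counter word per counter, plus the 64-bit timestamp slot,
 * mirroring the n_stat_size computation in nt4ga_stat_setup(). */
size_t stat_dma_size(int nb_counters)
{
	return (size_t)nb_counters * sizeof(uint32_t) + sizeof(uint64_t);
}

/* Round up to the 16 KiB (0x4000) boundary the FPGA statistics engine
 * needs; the driver achieves this by passing 0x4000 as the alignment
 * to its DMA allocator. */
size_t align_up_16k(size_t n)
{
	return (n + 0x3fffu) & ~(size_t)0x3fffu;
}
```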
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 192 +++++++++
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 149 +++++++
drivers/net/ntnic/include/ntos_drv.h | 9 +
.../ntnic/include/stream_binary_flow_api.h | 5 +
drivers/net/ntnic/meson.build | 3 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 1 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 10 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 370 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 40 ++
drivers/net/ntnic/ntnic_ethdev.c | 119 +++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 132 +++++++
drivers/net/ntnic/ntnic_mod_reg.c | 30 ++
drivers/net/ntnic/ntnic_mod_reg.h | 17 +
drivers/net/ntnic/ntutil/nt_util.h | 1 +
21 files changed, 1119 insertions(+), 12 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_adapter.c b/drivers/net/ntnic/adapter/nt4ga_adapter.c
index d9e6716c30..fa72dfda8d 100644
--- a/drivers/net/ntnic/adapter/nt4ga_adapter.c
+++ b/drivers/net/ntnic/adapter/nt4ga_adapter.c
@@ -212,19 +212,26 @@ static int nt4ga_adapter_init(struct adapter_info_s *p_adapter_info)
}
}
- nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
- if (p_nthw_rmc == NULL) {
- NT_LOG(ERR, NTNIC, "Failed to allocate memory for RMC module");
- return -1;
- }
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
- res = nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
- if (res) {
- NT_LOG(ERR, NTNIC, "Failed to initialize RMC module");
- return -1;
- }
+ if (nt4ga_stat_ops != NULL) {
+ /* Nt4ga Stat init/setup */
+ res = nt4ga_stat_ops->nt4ga_stat_init(p_adapter_info);
+
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot initialize the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+
+ res = nt4ga_stat_ops->nt4ga_stat_setup(p_adapter_info);
- nthw_rmc_unblock(p_nthw_rmc, false);
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot setup the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+ }
return 0;
}
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
new file mode 100644
index 0000000000..0e20f3ea45
--- /dev/null
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -0,0 +1,192 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+#include "nt_util.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "nthw_fpga_param_defs.h"
+#include "nt4ga_adapter.h"
+#include "ntnic_nim.h"
+#include "flow_filter.h"
+#include "ntnic_mod_reg.h"
+
+#define DEFAULT_MAX_BPS_SPEED 100e9
+
+static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
+{
+ const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
+ fpga_info_t *fpga_info = &p_adapter_info->fpga_info;
+ nthw_fpga_t *p_fpga = fpga_info->mp_fpga;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+
+ if (p_nt4ga_stat) {
+ memset(p_nt4ga_stat, 0, sizeof(nt4ga_stat_t));
+
+ } else {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ {
+ nthw_stat_t *p_nthw_stat = nthw_stat_new();
+
+ if (!p_nthw_stat) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ if (nthw_rmc_init(NULL, p_fpga, 0) == 0) {
+ nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
+
+ if (!p_nthw_rmc) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
+ p_nt4ga_stat->mp_nthw_rmc = p_nthw_rmc;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rmc = NULL;
+ }
+
+ p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
+ nthw_stat_init(p_nthw_stat, p_fpga, 0);
+
+ p_nt4ga_stat->mn_rx_host_buffers = p_nthw_stat->m_nb_rx_host_buffers;
+ p_nt4ga_stat->mn_tx_host_buffers = p_nthw_stat->m_nb_tx_host_buffers;
+
+ p_nt4ga_stat->mn_rx_ports = p_nthw_stat->m_nb_rx_ports;
+ p_nt4ga_stat->mn_tx_ports = p_nthw_stat->m_nb_tx_ports;
+ }
+
+ return 0;
+}
+
+static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
+{
+ const int n_physical_adapter_no = p_adapter_info->adapter_no;
+ (void)n_physical_adapter_no;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+
+ /* Allocate and map memory for fpga statistics */
+ {
+ uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
+ sizeof(p_nthw_stat->mp_timestamp));
+ struct nt_dma_s *p_dma;
+ int numa_node = p_adapter_info->fpga_info.numa_node;
+
+ /* FPGA needs a 16K alignment on Statistics */
+ p_dma = nt_dma_alloc(n_stat_size, 0x4000, numa_node);
+
+ if (!p_dma) {
+ NT_LOG_DBGX(ERR, NTNIC, "p_dma alloc failed");
+ return -1;
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%x @%d %" PRIx64 " %" PRIx64, n_stat_size, numa_node,
+ p_dma->addr, p_dma->iova);
+
+ NT_LOG(DBG, NTNIC,
+ "DMA: Physical adapter %02d, PA = 0x%016" PRIX64 " DMA = 0x%016" PRIX64
+ " size = 0x%" PRIX32 "",
+ n_physical_adapter_no, p_dma->iova, p_dma->addr, n_stat_size);
+
+ p_nt4ga_stat->p_stat_dma_virtual = (uint32_t *)p_dma->addr;
+ p_nt4ga_stat->n_stat_size = n_stat_size;
+ p_nt4ga_stat->p_stat_dma = p_dma;
+
+ memset(p_nt4ga_stat->p_stat_dma_virtual, 0xaa, n_stat_size);
+ nthw_stat_set_dma_address(p_nthw_stat, p_dma->iova,
+ p_nt4ga_stat->p_stat_dma_virtual);
+ }
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+
+ p_nt4ga_stat->mp_stat_structs_color =
+ calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_color) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_hb =
+ calloc(p_nt4ga_stat->mn_rx_host_buffers + p_nt4ga_stat->mn_tx_host_buffers,
+ sizeof(struct host_buffer_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_hb) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_rx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_tx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_port_load =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
+
+ if (!p_nt4ga_stat->mp_port_load) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+#ifdef NIM_TRIGGER
+ uint64_t max_bps_speed = nt_get_max_link_speed(p_adapter_info->nt4ga_link.speed_capa);
+
+ if (max_bps_speed == 0)
+ max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+
+#else
+ uint64_t max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+ NT_LOG(ERR, NTNIC, "NIM module not included");
+#endif
+
+ for (int p = 0; p < NUM_ADAPTER_PORTS_MAX; p++) {
+ p_nt4ga_stat->mp_port_load[p].rx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].tx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].rx_pps_max = max_bps_speed / (8 * (20 + 64));
+ p_nt4ga_stat->mp_port_load[p].tx_pps_max = max_bps_speed / (8 * (20 + 64));
+ }
+
+ memset(p_nt4ga_stat->a_stat_structs_color_base, 0,
+ sizeof(struct color_counters) * NT_MAX_COLOR_FLOW_STATS);
+ p_nt4ga_stat->last_timestamp = 0;
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ return 0;
+}
+
+static struct nt4ga_stat_ops ops = {
+ .nt4ga_stat_init = nt4ga_stat_init,
+ .nt4ga_stat_setup = nt4ga_stat_setup,
+};
+
+void nt4ga_stat_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "Stat module was initialized");
+ register_nt4ga_stat_ops(&ops);
+}
diff --git a/drivers/net/ntnic/include/common_adapter_defs.h b/drivers/net/ntnic/include/common_adapter_defs.h
new file mode 100644
index 0000000000..6ed9121f0f
--- /dev/null
+++ b/drivers/net/ntnic/include/common_adapter_defs.h
@@ -0,0 +1,15 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _COMMON_ADAPTER_DEFS_H_
+#define _COMMON_ADAPTER_DEFS_H_
+
+/*
+ * Declarations shared by NT adapter types.
+ */
+#define NUM_ADAPTER_MAX (8)
+#define NUM_ADAPTER_PORTS_MAX (128)
+
+#endif /* _COMMON_ADAPTER_DEFS_H_ */
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index eaa578e72a..1456977837 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -46,6 +46,10 @@ struct rte_flow {
uint32_t flow_stat_id;
+ uint64_t stat_pkts;
+ uint64_t stat_bytes;
+ uint8_t stat_tcp_flags;
+
uint16_t caller_id;
};
diff --git a/drivers/net/ntnic/include/nt4ga_adapter.h b/drivers/net/ntnic/include/nt4ga_adapter.h
index 809135f130..fef79ce358 100644
--- a/drivers/net/ntnic/include/nt4ga_adapter.h
+++ b/drivers/net/ntnic/include/nt4ga_adapter.h
@@ -6,6 +6,7 @@
#ifndef _NT4GA_ADAPTER_H_
#define _NT4GA_ADAPTER_H_
+#include "ntnic_stat.h"
#include "nt4ga_link.h"
typedef struct hw_info_s {
@@ -30,6 +31,7 @@ typedef struct hw_info_s {
#include "ntnic_stat.h"
typedef struct adapter_info_s {
+ struct nt4ga_stat_s nt4ga_stat;
struct nt4ga_filter_s nt4ga_filter;
struct nt4ga_link_s nt4ga_link;
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8ebdd98db0..1135e9a539 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -15,6 +15,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
+ pthread_mutex_t stat_lck;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 148088fe1d..2aee3f8425 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -6,6 +6,155 @@
#ifndef NTNIC_STAT_H_
#define NTNIC_STAT_H_
+#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_fpga_model.h"
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+struct nthw_stat {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_stat;
+ int mn_instance;
+
+ int mn_stat_layout_version;
+
+ bool mb_has_tx_stats;
+
+ int m_nb_phy_ports;
+ int m_nb_nim_ports;
+
+ int m_nb_rx_ports;
+ int m_nb_tx_ports;
+
+ int m_nb_rx_host_buffers;
+ int m_nb_tx_host_buffers;
+
+ int m_dbs_present;
+
+ int m_rx_port_replicate;
+
+ int m_nb_color_counters;
+
+ int m_nb_rx_hb_counters;
+ int m_nb_tx_hb_counters;
+
+ int m_nb_rx_port_counters;
+ int m_nb_tx_port_counters;
+
+ int m_nb_counters;
+
+ int m_nb_rpp_per_ps;
+
+ nthw_field_t *mp_fld_dma_ena;
+ nthw_field_t *mp_fld_cnt_clear;
+
+ nthw_field_t *mp_fld_tx_disable;
+
+ nthw_field_t *mp_fld_cnt_freeze;
+
+ nthw_field_t *mp_fld_stat_toggle_missed;
+
+ nthw_field_t *mp_fld_dma_lsb;
+ nthw_field_t *mp_fld_dma_msb;
+
+ nthw_field_t *mp_fld_load_bin;
+ nthw_field_t *mp_fld_load_bps_rx0;
+ nthw_field_t *mp_fld_load_bps_rx1;
+ nthw_field_t *mp_fld_load_bps_tx0;
+ nthw_field_t *mp_fld_load_bps_tx1;
+ nthw_field_t *mp_fld_load_pps_rx0;
+ nthw_field_t *mp_fld_load_pps_rx1;
+ nthw_field_t *mp_fld_load_pps_tx0;
+ nthw_field_t *mp_fld_load_pps_tx1;
+
+ uint64_t m_stat_dma_physical;
+ uint32_t *mp_stat_dma_virtual;
+
+ uint64_t *mp_timestamp;
+};
+
+typedef struct nthw_stat nthw_stat_t;
+typedef struct nthw_stat nthw_stat;
+
+struct color_counters {
+ uint64_t color_packets;
+ uint64_t color_bytes;
+ uint8_t tcp_flags;
+};
+
+struct host_buffer_counters {
+};
+
+struct port_load_counters {
+ uint64_t rx_pps_max;
+ uint64_t tx_pps_max;
+ uint64_t rx_bps_max;
+ uint64_t tx_bps_max;
+};
+
+struct port_counters_v2 {
+};
+
+struct flm_counters_v1 {
+};
+
+struct nt4ga_stat_s {
+ nthw_stat_t *mp_nthw_stat;
+ nthw_rmc_t *mp_nthw_rmc;
+ struct nt_dma_s *p_stat_dma;
+ uint32_t *p_stat_dma_virtual;
+ uint32_t n_stat_size;
+
+ uint64_t last_timestamp;
+
+ int mn_rx_host_buffers;
+ int mn_tx_host_buffers;
+
+ int mn_rx_ports;
+ int mn_tx_ports;
+
+ struct color_counters *mp_stat_structs_color;
+ /* For calculating increments between stats polls */
+ struct color_counters a_stat_structs_color_base[NT_MAX_COLOR_FLOW_STATS];
+
+ /* Port counters for inline */
+ struct {
+ struct port_counters_v2 *mp_stat_structs_port_rx;
+ struct port_counters_v2 *mp_stat_structs_port_tx;
+ } cap;
+
+ struct host_buffer_counters *mp_stat_structs_hb;
+ struct port_load_counters *mp_port_load;
+
+ /* Rx/Tx totals: */
+ uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
+
+ uint64_t a_port_rx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ /* Base is for calculating increments between statistics reads */
+ uint64_t a_port_rx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_packets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_packets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_drops_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_drops_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+};
+
+typedef struct nt4ga_stat_s nt4ga_stat_t;
+
+nthw_stat_t *nthw_stat_new(void);
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_stat_delete(nthw_stat_t *p);
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual);
+int nthw_stat_trigger(nthw_stat_t *p);
#endif /* NTNIC_STAT_H_ */
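The `*_total` / `*_base` array pairs in `nt4ga_stat_s` above implement delta accounting: each poll reads the free-running hardware totals, adds `total - base` to the software counters, then advances the base. A minimal standalone sketch of that pattern (the names here are illustrative, not driver symbols):

```c
#include <stdint.h>

/* Free-running hardware total and the value seen at the previous poll. */
struct delta_counter {
	uint64_t total; /* monotonically increasing, written by "hardware" */
	uint64_t base;  /* snapshot taken at the last poll */
};

/* Accumulate the increment since the last poll and advance the base. */
static uint64_t poll_delta(struct delta_counter *c, uint64_t *accum)
{
	uint64_t inc = c->total - c->base; /* uint64_t subtraction handles wrap */
	*accum += inc;
	c->base = c->total;
	return inc;
}
```

Because the totals are only ever read and the base is private to the poller, the hardware never needs to be reset between polls.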
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 8fd577dfe3..7b3c8ff3d6 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -57,6 +57,9 @@ struct __rte_cache_aligned ntnic_rx_queue {
struct flow_queue_id_s queue; /* queue info - user id and hw queue index */
struct rte_mempool *mb_pool; /* mbuf memory pool */
uint16_t buf_size; /* Size of data area in mbuf */
+ unsigned long rx_pkts; /* Rx packet statistics */
+ unsigned long rx_bytes; /* Rx bytes statistics */
+ unsigned long err_pkts; /* Rx error packet statistics */
int enabled; /* Enabling/disabling of this queue */
struct hwq_s hwq;
@@ -80,6 +83,9 @@ struct __rte_cache_aligned ntnic_tx_queue {
int rss_target_id;
uint32_t port; /* Tx port for this queue */
+ unsigned long tx_pkts; /* Tx packet statistics */
+ unsigned long tx_bytes; /* Tx bytes statistics */
+ unsigned long err_pkts; /* Tx error packet stat */
int enabled; /* Enabling/disabling of this queue */
enum fpga_info_profile profile; /* Inline / Capture */
};
@@ -95,6 +101,7 @@ struct pmd_internals {
/* Offset of the VF from the PF */
uint8_t vf_offset;
uint32_t port;
+ uint32_t port_id;
nt_meta_port_type_t type;
struct flow_queue_id_s vpq[MAX_QUEUES];
unsigned int vpq_nb_vq;
@@ -107,6 +114,8 @@ struct pmd_internals {
struct rte_ether_addr eth_addrs[NUM_MAC_ADDRS_PER_PORT];
/* Multicast ethernet (MAC) addresses. */
struct rte_ether_addr mc_addrs[NUM_MULTICAST_ADDRS_PER_PORT];
+ uint64_t last_stat_rtc;
+ uint64_t rx_missed;
struct pmd_internals *next;
};
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index e5fe686d99..4ce1561033 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,7 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include <rte_ether.h>
#include "rte_flow.h"
#include "rte_flow_driver.h"
@@ -44,6 +45,10 @@
#define FLOW_MAX_QUEUES 128
#define RAW_ENCAP_DECAP_ELEMS_MAX 16
+
+extern uint64_t rte_tsc_freq;
+extern rte_spinlock_t hwlock;
+
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 92167d24e4..216341bb11 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -25,10 +25,12 @@ includes = [
# all sources
sources = files(
'adapter/nt4ga_adapter.c',
+ 'adapter/nt4ga_stat/nt4ga_stat.c',
'dbsconfig/ntnic_dbsconfig.c',
'link_mgmt/link_100g/nt4ga_link_100g.c',
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
+ 'ntnic_filter/ntnic_filter.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
@@ -48,6 +50,7 @@ sources = files(
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
+ 'nthw/stat/nthw_stat.c',
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index 2345820bdc..b239752674 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -44,6 +44,7 @@ typedef struct nthw_rmc nthw_rmc;
nthw_rmc_t *nthw_rmc_new(void);
int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 4a01424c24..748519aeb4 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,16 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+void nthw_rmc_block(nthw_rmc_t *p)
+{
+ /* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
+ if (!p->mb_administrative_block) {
+ nthw_field_set_flush(p->mp_fld_ctrl_block_stat_drop);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_keep_alive);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_mac_port);
+ }
+}
+
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary)
{
uint32_t n_block_mask = ~0U << (b_is_secondary ? p->mn_nims : p->mn_ports);
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
new file mode 100644
index 0000000000..6adcd2e090
--- /dev/null
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -0,0 +1,370 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "nt_util.h"
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "ntnic_stat.h"
+
+#include <malloc.h>
+
+nthw_stat_t *nthw_stat_new(void)
+{
+ nthw_stat_t *p = malloc(sizeof(nthw_stat_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_stat_t));
+
+ return p;
+}
+
+void nthw_stat_delete(nthw_stat_t *p)
+{
+ if (p)
+ free(p);
+}
+
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ uint64_t n_module_version_packed64 = -1;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_STA, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: STAT %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_stat = mod;
+
+ n_module_version_packed64 = nthw_module_get_version_packed64(p->mp_mod_stat);
+ NT_LOG(DBG, NTHW, "%s: STAT %d: version=0x%08" PRIX64, p_adapter_id_str, p->mn_instance,
+ n_module_version_packed64);
+
+ {
+ nthw_register_t *p_reg;
+ /* STA_CFG register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_CFG);
+ p->mp_fld_dma_ena = nthw_register_get_field(p_reg, STA_CFG_DMA_ENA);
+ p->mp_fld_cnt_clear = nthw_register_get_field(p_reg, STA_CFG_CNT_CLEAR);
+
+ /* CFG: fields NOT available from v. 3 */
+ p->mp_fld_tx_disable = nthw_register_query_field(p_reg, STA_CFG_TX_DISABLE);
+ p->mp_fld_cnt_freeze = nthw_register_query_field(p_reg, STA_CFG_CNT_FRZ);
+
+ /* STA_STATUS register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_STATUS);
+ p->mp_fld_stat_toggle_missed =
+ nthw_register_get_field(p_reg, STA_STATUS_STAT_TOGGLE_MISSED);
+
+ /* HOST_ADR registers */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_LSB);
+ p->mp_fld_dma_lsb = nthw_register_get_field(p_reg, STA_HOST_ADR_LSB_LSB);
+
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_MSB);
+ p->mp_fld_dma_msb = nthw_register_get_field(p_reg, STA_HOST_ADR_MSB_MSB);
+
+ /* Binning cycles */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BIN);
+
+ if (p_reg) {
+ p->mp_fld_load_bin = nthw_register_get_field(p_reg, STA_LOAD_BIN_BIN);
+
+ /* Bandwidth load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx0 = NULL;
+ }
+
+ /* Bandwidth load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx1 = NULL;
+ }
+
+ /* Bandwidth load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx0 = NULL;
+ }
+
+ /* Bandwidth load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx1 = NULL;
+ }
+
+ /* Packet load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx0 = NULL;
+ }
+
+ /* Packet load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx1 = NULL;
+ }
+
+ /* Packet load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx0 = NULL;
+ }
+
+ /* Packet load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+
+ } else {
+ p->mp_fld_load_bin = NULL;
+ p->mp_fld_load_bps_rx0 = NULL;
+ p->mp_fld_load_bps_rx1 = NULL;
+ p->mp_fld_load_bps_tx0 = NULL;
+ p->mp_fld_load_bps_tx1 = NULL;
+ p->mp_fld_load_pps_rx0 = NULL;
+ p->mp_fld_load_pps_rx1 = NULL;
+ p->mp_fld_load_pps_tx0 = NULL;
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+ }
+
+ /* Params */
+ p->m_nb_nim_ports = nthw_fpga_get_product_param(p_fpga, NT_NIMS, 0);
+ p->m_nb_phy_ports = nthw_fpga_get_product_param(p_fpga, NT_PHY_PORTS, 0);
+
+ /* VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_STA_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_PORTS, 0);
+ }
+ }
+
+ p->m_nb_rpp_per_ps = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+
+ p->m_nb_tx_ports = nthw_fpga_get_product_param(p_fpga, NT_TX_PORTS, 0);
+ p->m_rx_port_replicate = nthw_fpga_get_product_param(p_fpga, NT_RX_PORT_REPLICATE, 0);
+
+ /* VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_STA_COLORS, 64) * 2;
+
+ if (p->m_nb_color_counters == 0) {
+ /* non-VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_CAT_FUNCS, 0) * 2;
+ }
+
+ p->m_nb_rx_host_buffers = nthw_fpga_get_product_param(p_fpga, NT_QUEUES, 0);
+ p->m_nb_tx_host_buffers = p->m_nb_rx_host_buffers;
+
+ p->m_dbs_present = nthw_fpga_get_product_param(p_fpga, NT_DBS_PRESENT, 0);
+
+ p->m_nb_rx_hb_counters = (p->m_nb_rx_host_buffers * (6 + 2 *
+ (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ?
+ p->m_dbs_present : 0)));
+
+ p->m_nb_tx_hb_counters = 0;
+
+ p->m_nb_rx_port_counters = 42 +
+ 2 * (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ? p->m_dbs_present : 0);
+ p->m_nb_tx_port_counters = 0;
+
+ p->m_nb_counters =
+ p->m_nb_color_counters + p->m_nb_rx_hb_counters + p->m_nb_tx_hb_counters;
+
+ p->mn_stat_layout_version = 0;
+
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 9)) {
+ p->mn_stat_layout_version = 7;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 8)) {
+ p->mn_stat_layout_version = 6;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->mn_stat_layout_version = 5;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 4)) {
+ p->mn_stat_layout_version = 4;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 3)) {
+ p->mn_stat_layout_version = 3;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 2)) {
+ p->mn_stat_layout_version = 2;
+
+ } else if (n_module_version_packed64 > VERSION_PACKED64(0, 0)) {
+ p->mn_stat_layout_version = 1;
+
+ } else {
+ p->mn_stat_layout_version = 0;
+ NT_LOG(ERR, NTHW, "%s: unknown module_version 0x%08" PRIX64 " layout=%d",
+ p_adapter_id_str, n_module_version_packed64, p->mn_stat_layout_version);
+ }
+
+ assert(p->mn_stat_layout_version);
+
+ /* STA module 0.2+ adds IPF counters per port (Rx feature) */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 2))
+ p->m_nb_rx_port_counters += 6;
+
+ /* STA module 0.3+ adds TX stats */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3) || p->m_nb_tx_ports >= 1)
+ p->mb_has_tx_stats = true;
+
+ /* STA module 0.3+ adds TX stat counters */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3))
+ p->m_nb_tx_port_counters += 22;
+
+ /* STA module 0.4+ adds TX drop event counter */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 4))
+ p->m_nb_tx_port_counters += 1; /* TX drop event counter */
+
+ /*
+ * STA module 0.6+ adds pkt filter drop octets+pkts, retransmit and
+ * duplicate counters
+ */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->m_nb_rx_port_counters += 4;
+ p->m_nb_tx_port_counters += 1;
+ }
+
+ p->m_nb_counters += (p->m_nb_rx_ports * p->m_nb_rx_port_counters);
+
+ if (p->mb_has_tx_stats)
+ p->m_nb_counters += (p->m_nb_tx_ports * p->m_nb_tx_port_counters);
+
+ /* Output params (debug) */
+ NT_LOG(DBG, NTHW, "%s: nims=%d rxports=%d txports=%d rxrepl=%d colors=%d queues=%d",
+ p_adapter_id_str, p->m_nb_nim_ports, p->m_nb_rx_ports, p->m_nb_tx_ports,
+ p->m_rx_port_replicate, p->m_nb_color_counters, p->m_nb_rx_host_buffers);
+ NT_LOG(DBG, NTHW, "%s: hbs=%d hbcounters=%d rxcounters=%d txcounters=%d",
+ p_adapter_id_str, p->m_nb_rx_host_buffers, p->m_nb_rx_hb_counters,
+ p->m_nb_rx_port_counters, p->m_nb_tx_port_counters);
+ NT_LOG(DBG, NTHW, "%s: layout=%d", p_adapter_id_str, p->mn_stat_layout_version);
+ NT_LOG(DBG, NTHW, "%s: counters=%d (0x%X)", p_adapter_id_str, p->m_nb_counters,
+ p->m_nb_counters);
+
+ /* Init */
+ if (p->mp_fld_tx_disable)
+ nthw_field_set_flush(p->mp_fld_tx_disable);
+
+ nthw_field_update_register(p->mp_fld_cnt_clear);
+ nthw_field_set_flush(p->mp_fld_cnt_clear);
+ nthw_field_clr_flush(p->mp_fld_cnt_clear);
+
+ nthw_field_update_register(p->mp_fld_stat_toggle_missed);
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_clr_flush(p->mp_fld_dma_ena);
+ nthw_field_update_register(p->mp_fld_dma_ena);
+
+ /* Set the sliding windows size for port load */
+ if (p->mp_fld_load_bin) {
+ uint32_t rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ uint32_t bin =
+ (uint32_t)(((PORT_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) -
+ 1ULL);
+ nthw_field_set_val_flush32(p->mp_fld_load_bin, bin);
+ }
+
+ return 0;
+}
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual)
+{
+ assert(p_stat_dma_virtual);
+ p->mp_timestamp = NULL;
+
+ p->m_stat_dma_physical = stat_dma_physical;
+ p->mp_stat_dma_virtual = p_stat_dma_virtual;
+
+ memset(p->mp_stat_dma_virtual, 0, (p->m_nb_counters * sizeof(uint32_t)));
+
+ nthw_field_set_val_flush32(p->mp_fld_dma_msb,
+ (uint32_t)((p->m_stat_dma_physical >> 32) & 0xffffffff));
+ nthw_field_set_val_flush32(p->mp_fld_dma_lsb,
+ (uint32_t)(p->m_stat_dma_physical & 0xffffffff));
+
+ p->mp_timestamp = (uint64_t *)(p->mp_stat_dma_virtual + p->m_nb_counters);
+ NT_LOG(DBG, NTHW,
+ "stat_dma_physical=%" PRIX64 " p_stat_dma_virtual=%" PRIX64
+ " mp_timestamp=%" PRIX64 "", p->m_stat_dma_physical,
+ (uint64_t)p->mp_stat_dma_virtual, (uint64_t)p->mp_timestamp);
+ *p->mp_timestamp = (uint64_t)(int64_t)-1;
+ return 0;
+}
+
+int nthw_stat_trigger(nthw_stat_t *p)
+{
+ int n_toggle_miss = nthw_field_get_updated(p->mp_fld_stat_toggle_missed);
+
+ if (n_toggle_miss)
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ if (p->mp_timestamp)
+ *p->mp_timestamp = -1; /* Clear old ts */
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_set_flush(p->mp_fld_dma_ena);
+
+ return 0;
+}
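`nthw_stat_init()` above maps the packed module version to a statistics layout version with a chain of ordered `>=` comparisons, so each FPGA revision selects the newest layout it supports. A sketch of the packing and lookup, assuming the conventional `major << 32 | minor` encoding (the real `VERSION_PACKED64` macro is defined in the nthw headers and may differ):

```c
#include <stdint.h>

/* Assumed packing: 32-bit major in the high word, 32-bit minor in the low. */
#define VERSION_PACKED64(major, minor) \
	((((uint64_t)(major)) << 32) | ((uint64_t)(minor) & 0xffffffff))

/* Pick the newest stat layout the module version supports. */
static int stat_layout_version(uint64_t v)
{
	if (v >= VERSION_PACKED64(0, 9))
		return 7;
	if (v >= VERSION_PACKED64(0, 8))
		return 6;
	if (v >= VERSION_PACKED64(0, 6))
		return 5;
	if (v >= VERSION_PACKED64(0, 4))
		return 4;
	if (v >= VERSION_PACKED64(0, 3))
		return 3;
	if (v >= VERSION_PACKED64(0, 2))
		return 2;
	if (v > VERSION_PACKED64(0, 0))
		return 1;
	return 0; /* unknown/unsupported */
}
```

Ordering the comparisons from newest to oldest means an intermediate version (say 0.7) falls through to the highest layout it still satisfies.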
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 2b059d98ff..ddc144dc02 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -46,6 +46,7 @@
#define MOD_SDC (0xd2369530UL)
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
+#define MOD_STA (0x76fae64dUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7741aa563f..8f196f885f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -45,6 +45,7 @@
#include "nthw_fpga_reg_defs_sdc.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
+#include "nthw_fpga_reg_defs_sta.h"
#include "nthw_fpga_reg_defs_tx_cpy.h"
#include "nthw_fpga_reg_defs_tx_ins.h"
#include "nthw_fpga_reg_defs_tx_rpl.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
new file mode 100644
index 0000000000..640ffcbc52
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -0,0 +1,40 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_STA_
+#define _NTHW_FPGA_REG_DEFS_STA_
+
+/* STA */
+#define STA_CFG (0xcecaf9f4UL)
+#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
+#define STA_CFG_CNT_FRZ (0x8c27a596UL)
+#define STA_CFG_DMA_ENA (0x940dbacUL)
+#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_HOST_ADR_LSB (0xde569336UL)
+#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
+#define STA_HOST_ADR_MSB (0xdf94f901UL)
+#define STA_HOST_ADR_MSB_MSB (0x114798c8UL)
+#define STA_LOAD_BIN (0x2e842591UL)
+#define STA_LOAD_BIN_BIN (0x1a2b942eUL)
+#define STA_LOAD_BPS_RX_0 (0xbf8f4595UL)
+#define STA_LOAD_BPS_RX_0_BPS (0x41647781UL)
+#define STA_LOAD_BPS_RX_1 (0xc8887503UL)
+#define STA_LOAD_BPS_RX_1_BPS (0x7c045e31UL)
+#define STA_LOAD_BPS_TX_0 (0x9ae41a49UL)
+#define STA_LOAD_BPS_TX_0_BPS (0x870b7e06UL)
+#define STA_LOAD_BPS_TX_1 (0xede32adfUL)
+#define STA_LOAD_BPS_TX_1_BPS (0xba6b57b6UL)
+#define STA_LOAD_PPS_RX_0 (0x811173c3UL)
+#define STA_LOAD_PPS_RX_0_PPS (0xbee573fcUL)
+#define STA_LOAD_PPS_RX_1 (0xf6164355UL)
+#define STA_LOAD_PPS_RX_1_PPS (0x83855a4cUL)
+#define STA_LOAD_PPS_TX_0 (0xa47a2c1fUL)
+#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
+#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
+#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_STATUS (0x91c5c51cUL)
+#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_STA_ */
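The `STA_HOST_ADR_LSB`/`STA_HOST_ADR_MSB` register pair above carries a 64-bit DMA address in two 32-bit writes, which is how `nthw_stat_set_dma_address()` programs the statistics DMA base. The split and its inverse reduce to this (a standalone sketch, not driver code):

```c
#include <stdint.h>

/* Split a 64-bit DMA address into the two 32-bit register values. */
static void dma_addr_split(uint64_t addr, uint32_t *lsb, uint32_t *msb)
{
	*lsb = (uint32_t)(addr & 0xffffffff);
	*msb = (uint32_t)((addr >> 32) & 0xffffffff);
}

/* Recombine, e.g. to verify what the hardware will latch. */
static uint64_t dma_addr_join(uint32_t lsb, uint32_t msb)
{
	return ((uint64_t)msb << 32) | lsb;
}
```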
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 1b25621537..86876ecda6 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -65,6 +65,8 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define MAX_RX_PACKETS 128
#define MAX_TX_PACKETS 128
+uint64_t rte_tsc_freq;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -88,7 +90,7 @@ static const struct rte_pci_id nthw_pci_id_map[] = {
static const struct sg_ops_s *sg_ops;
-static rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
+rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
/*
* Store and get adapter info
@@ -156,6 +158,102 @@ get_pdrv_from_pci(struct rte_pci_addr addr)
return p_drv;
}
+static int dpdk_stats_collect(struct pmd_internals *internals, struct rte_eth_stats *stats)
+{
+ const struct ntnic_filter_ops *ntnic_filter_ops = get_ntnic_filter_ops();
+
+ if (ntnic_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "ntnic_filter_ops uninitialized");
+ return -1;
+ }
+
+ unsigned int i;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const int if_index = internals->n_intf_no;
+ uint64_t rx_total = 0;
+ uint64_t rx_total_b = 0;
+ uint64_t tx_total = 0;
+ uint64_t tx_total_b = 0;
+ uint64_t tx_err_total = 0;
+
+ if (!p_nthw_stat || !p_nt4ga_stat || !stats || if_index < 0 ||
+ if_index >= NUM_ADAPTER_PORTS_MAX) {
+ NT_LOG_DBGX(WRN, NTNIC, "error exit");
+ return -1;
+ }
+
+ /*
+ * Pull the latest port statistics (Rx/Tx packets and bytes).
+ * The results are returned in the internals->rxq_scg[] and internals->txq_scg[] arrays
+ */
+ ntnic_filter_ops->poll_statistics(internals);
+
+ memset(stats, 0, sizeof(*stats));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_rx_queues; i++) {
+ stats->q_ipackets[i] = internals->rxq_scg[i].rx_pkts;
+ stats->q_ibytes[i] = internals->rxq_scg[i].rx_bytes;
+ rx_total += stats->q_ipackets[i];
+ rx_total_b += stats->q_ibytes[i];
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_tx_queues; i++) {
+ stats->q_opackets[i] = internals->txq_scg[i].tx_pkts;
+ stats->q_obytes[i] = internals->txq_scg[i].tx_bytes;
+ stats->q_errors[i] = internals->txq_scg[i].err_pkts;
+ tx_total += stats->q_opackets[i];
+ tx_total_b += stats->q_obytes[i];
+ tx_err_total += stats->q_errors[i];
+ }
+
+ stats->imissed = internals->rx_missed;
+ stats->ipackets = rx_total;
+ stats->ibytes = rx_total_b;
+ stats->opackets = tx_total;
+ stats->obytes = tx_total_b;
+ stats->oerrors = tx_err_total;
+
+ return 0;
+}
+
+static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s *p_nt_drv,
+ int n_intf_no)
+{
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ unsigned int i;
+
+ if (!p_nthw_stat || !p_nt4ga_stat || n_intf_no < 0 || n_intf_no >= NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /* Rx */
+ for (i = 0; i < internals->nb_rx_queues; i++) {
+ internals->rxq_scg[i].rx_pkts = 0;
+ internals->rxq_scg[i].rx_bytes = 0;
+ internals->rxq_scg[i].err_pkts = 0;
+ }
+
+ internals->rx_missed = 0;
+
+ /* Tx */
+ for (i = 0; i < internals->nb_tx_queues; i++) {
+ internals->txq_scg[i].tx_pkts = 0;
+ internals->txq_scg[i].tx_bytes = 0;
+ internals->txq_scg[i].err_pkts = 0;
+ }
+
+ p_nt4ga_stat->n_totals_reset_timestamp = time(NULL);
+
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
static int
eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
{
@@ -194,6 +292,23 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
return 0;
}
+static int eth_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ dpdk_stats_collect(internals, stats);
+ return 0;
+}
+
+static int eth_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ const int if_index = internals->n_intf_no;
+ dpdk_stats_reset(internals, p_nt_drv, if_index);
+ return 0;
+}
+
static int
eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info)
{
@@ -1455,6 +1570,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_set_link_down = eth_dev_set_link_down,
.dev_close = eth_dev_close,
.link_update = eth_link_update,
+ .stats_get = eth_stats_get,
+ .stats_reset = eth_stats_reset,
.dev_infos_get = eth_dev_infos_get,
.fw_version_get = eth_fw_version_get,
.rx_queue_setup = eth_rx_scg_queue_setup,
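`dpdk_stats_collect()` above fills the per-queue arrays up to `RTE_ETHDEV_QUEUE_STAT_CNTRS` entries and sums them into the port totals. The shape of that aggregation, reduced to plain C (the queue-stat cap and struct layouts here are illustrative stand-ins, not the DPDK definitions):

```c
#include <stdint.h>

#define QUEUE_STAT_CNTRS 16 /* stands in for RTE_ETHDEV_QUEUE_STAT_CNTRS */

struct queue_stats { uint64_t pkts, bytes; };
struct port_stats {
	uint64_t q_ipackets[QUEUE_STAT_CNTRS];
	uint64_t ipackets;
};

/* Copy per-queue counters into the capped array and derive the port total. */
static void collect(struct port_stats *out, const struct queue_stats *q,
		    unsigned int nb_queues)
{
	out->ipackets = 0;
	for (unsigned int i = 0; i < QUEUE_STAT_CNTRS && i < nb_queues; i++) {
		out->q_ipackets[i] = q[i].pkts;
		out->ipackets += out->q_ipackets[i];
	}
}
```

The double bound (`i < QUEUE_STAT_CNTRS && i < nb_queues`) matters: queues beyond the ethdev counter cap still carry traffic but cannot be reported per-queue.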
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 4962ab8d5a..e2fce02afa 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -8,11 +8,17 @@
#include "create_elements.h"
#include "ntnic_mod_reg.h"
#include "ntos_system.h"
+#include "ntos_drv.h"
#define MAX_RTE_FLOWS 8192
+#define MAX_COLOR_FLOW_STATS 0x400
#define NT_MAX_COLOR_FLOW_STATS 0x400
+#if (MAX_COLOR_FLOW_STATS != NT_MAX_COLOR_FLOW_STATS)
+#error Difference in COLOR_FLOW_STATS. Please synchronize the defines.
+#endif
+
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
@@ -668,6 +676,9 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
/* Cleanup recorded flows */
nt_flows[flow].used = 0;
nt_flows[flow].caller_id = 0;
+ nt_flows[flow].stat_bytes = 0UL;
+ nt_flows[flow].stat_pkts = 0UL;
+ nt_flows[flow].stat_tcp_flags = 0;
}
}
@@ -707,6 +718,127 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int poll_statistics(struct pmd_internals *internals)
+{
+ int flow;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const int if_index = internals->n_intf_no;
+ uint64_t last_stat_rtc = 0;
+
+ if (!p_nt4ga_stat || if_index < 0 || if_index >= NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ assert(rte_tsc_freq > 0);
+
+ rte_spinlock_lock(&hwlock);
+
+ uint64_t now_rtc = rte_get_tsc_cycles();
+
+ /*
+ * Check per port at most once a second:
+ * if more than a second has passed since the last stat read, do a new one
+ */
+ if ((now_rtc - internals->last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ return 0;
+ }
+
+ internals->last_stat_rtc = now_rtc;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /*
+ * Add the RX statistics increments since last time we polled.
+ * (No difference if physical or virtual port)
+ */
+ internals->rxq_scg[0].rx_pkts += p_nt4ga_stat->a_port_rx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_packets_base[if_index];
+ internals->rxq_scg[0].rx_bytes += p_nt4ga_stat->a_port_rx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_octets_base[if_index];
+ internals->rxq_scg[0].err_pkts += 0;
+ internals->rx_missed += p_nt4ga_stat->a_port_rx_drops_total[if_index] -
+ p_nt4ga_stat->a_port_rx_drops_base[if_index];
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_rx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_packets_total[if_index];
+ p_nt4ga_stat->a_port_rx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_octets_total[if_index];
+ p_nt4ga_stat->a_port_rx_drops_base[if_index] =
+ p_nt4ga_stat->a_port_rx_drops_total[if_index];
+
+ /* Tx (here we must distinguish between physical and virtual ports) */
+ if (internals->type == PORT_TYPE_PHYSICAL) {
+ /* Add the statistics increments since last time we polled */
+ internals->txq_scg[0].tx_pkts += p_nt4ga_stat->a_port_tx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_packets_base[if_index];
+ internals->txq_scg[0].tx_bytes += p_nt4ga_stat->a_port_tx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_octets_base[if_index];
+ internals->txq_scg[0].err_pkts += 0;
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_tx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_packets_total[if_index];
+ p_nt4ga_stat->a_port_tx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_octets_total[if_index];
+ }
+
+ /* Globally only once a second */
+ if ((now_rtc - last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return 0;
+ }
+
+ last_stat_rtc = now_rtc;
+
+ /* All color counters are global, therefore only one PMD must update them */
+ const struct color_counters *p_color_counters = p_nt4ga_stat->mp_stat_structs_color;
+ struct color_counters *p_color_counters_base = p_nt4ga_stat->a_stat_structs_color_base;
+ uint64_t color_packets_accumulated, color_bytes_accumulated;
+
+ for (flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used) {
+ unsigned int color = nt_flows[flow].flow_stat_id;
+
+ if (color < NT_MAX_COLOR_FLOW_STATS) {
+ color_packets_accumulated = p_color_counters[color].color_packets;
+ nt_flows[flow].stat_pkts +=
+ (color_packets_accumulated -
+ p_color_counters_base[color].color_packets);
+
+ nt_flows[flow].stat_tcp_flags |= p_color_counters[color].tcp_flags;
+
+ color_bytes_accumulated = p_color_counters[color].color_bytes;
+ nt_flows[flow].stat_bytes +=
+ (color_bytes_accumulated -
+ p_color_counters_base[color].color_bytes);
+
+ /* Update the counter bases */
+ p_color_counters_base[color].color_packets =
+ color_packets_accumulated;
+ p_color_counters_base[color].color_bytes = color_bytes_accumulated;
+ }
+ }
+ }
+
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
+static const struct ntnic_filter_ops ntnic_filter_ops = {
+ .poll_statistics = poll_statistics,
+};
+
+void ntnic_filter_init(void)
+{
+ register_ntnic_filter_ops(&ntnic_filter_ops);
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 593b56bf5b..355e2032b1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,21 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+static const struct ntnic_filter_ops *ntnic_filter_ops;
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
+{
+ ntnic_filter_ops = ops;
+}
+
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void)
+{
+ if (ntnic_filter_ops == NULL)
+ ntnic_filter_init();
+
+ return ntnic_filter_ops;
+}
+
static struct link_ops_s *link_100g_ops;
void register_100g_link_ops(struct link_ops_s *ops)
@@ -47,6 +62,21 @@ const struct port_ops *get_port_ops(void)
return port_ops;
}
+static const struct nt4ga_stat_ops *nt4ga_stat_ops;
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops)
+{
+ nt4ga_stat_ops = ops;
+}
+
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void)
+{
+ if (nt4ga_stat_ops == NULL)
+ nt4ga_stat_ops_init();
+
+ return nt4ga_stat_ops;
+}
+
static const struct adapter_ops *adapter_ops;
void register_adapter_ops(const struct adapter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index e40ed9b949..30b9afb7d3 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -111,6 +111,14 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+struct ntnic_filter_ops {
+ int (*poll_statistics)(struct pmd_internals *internals);
+};
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops);
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void);
+void ntnic_filter_init(void);
+
struct link_ops_s {
int (*link_init)(struct adapter_info_s *p_adapter_info, nthw_fpga_t *p_fpga);
};
@@ -175,6 +183,15 @@ void register_port_ops(const struct port_ops *ops);
const struct port_ops *get_port_ops(void);
void port_init(void);
+struct nt4ga_stat_ops {
+ int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+};
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void);
+void nt4ga_stat_ops_init(void);
+
struct adapter_ops {
int (*init)(struct adapter_info_s *p_adapter_info);
int (*deinit)(struct adapter_info_s *p_adapter_info);
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index a482fb43ad..f2eccf3501 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -22,6 +22,7 @@
* The windows size must max be 3 min in order to
* prevent overflow.
*/
+#define PORT_LOAD_WINDOWS_SIZE 2ULL
#define FLM_LOAD_WINDOWS_SIZE 2ULL
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 55/73] net/ntnic: add rpf module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (53 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 54/73] net/ntnic: add statistics API Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 56/73] net/ntnic: add statistics poll Serhii Iliushyk
` (17 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Receive Port FIFO (RPF) module controls the small FPGA FIFO
in which packets are stored before they enter the packet processor pipeline.
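As context for the patch below, here is a minimal, hardware-free sketch of the block/unblock scheme it applies around statistics setup. All names here are hypothetical mocks, not the driver's API: the real code drives FPGA register fields through nthw_field_set_val_flush32(). Blocking clears the port-enable (PEN) field so packets are held out of the pipeline; unblocking re-enables all ports together with the RPP, status-toggle, and keep-alive enables.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mock of the RPF CONTROL register fields the driver
 * touches; real code writes them via nthw_field_set_val_flush32(). */
struct mock_rpf {
	uint32_t pen;           /* per-port enable bits */
	uint32_t rpp_en;        /* packet processor enable */
	uint32_t st_tgl_en;     /* status toggle enable */
	uint32_t keep_alive_en; /* keep-alive enable */
};

/* Mirrors what nthw_rpf_block() does: stop all MAC ports. */
static void mock_rpf_block(struct mock_rpf *p)
{
	p->pen = 0;
}

/* Mirrors what nthw_rpf_unblock() does: re-enable everything. */
static void mock_rpf_unblock(struct mock_rpf *p)
{
	p->pen = ~0U;
	p->rpp_en = ~0U;
	p->st_tgl_en = 1;
	p->keep_alive_en = 1;
}

/* Exercise one block/unblock round trip. */
static bool mock_rpf_roundtrip(void)
{
	struct mock_rpf r = { ~0U, ~0U, 1, 1 };

	mock_rpf_block(&r);
	if (r.pen != 0)
		return false;

	mock_rpf_unblock(&r);
	return r.pen == ~0U && r.rpp_en == ~0U &&
	       r.st_tgl_en == 1 && r.keep_alive_en == 1;
}
```

This is the same ordering nt4ga_stat_setup() uses: block before mapping the statistics DMA memory, unblock once the mapping is in place.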
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 25 +++-
drivers/net/ntnic/include/ntnic_stat.h | 2 +
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +++++++
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 ++++++++++++++++++
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 ++
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +++
10 files changed, 228 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 0e20f3ea45..f733fd5459 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -11,6 +11,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nim.h"
#include "flow_filter.h"
+#include "ntnic_stat.h"
#include "ntnic_mod_reg.h"
#define DEFAULT_MAX_BPS_SPEED 100e9
@@ -43,7 +44,7 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
if (!p_nthw_rmc) {
nthw_stat_delete(p_nthw_stat);
- NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ NT_LOG(ERR, NTNIC, "%s: ERROR rmc allocation", p_adapter_id_str);
return -1;
}
@@ -54,6 +55,22 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
p_nt4ga_stat->mp_nthw_rmc = NULL;
}
+ if (nthw_rpf_init(NULL, p_fpga, p_adapter_info->adapter_no) == 0) {
+ nthw_rpf_t *p_nthw_rpf = nthw_rpf_new();
+
+ if (!p_nthw_rpf) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rpf_init(p_nthw_rpf, p_fpga, p_adapter_info->adapter_no);
+ p_nt4ga_stat->mp_nthw_rpf = p_nthw_rpf;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rpf = NULL;
+ }
+
p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
nthw_stat_init(p_nthw_stat, p_fpga, 0);
@@ -77,6 +94,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_block(p_nt4ga_stat->mp_nthw_rpf);
+
/* Allocate and map memory for fpga statistics */
{
uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
@@ -112,6 +132,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_unblock(p_nt4ga_stat->mp_nthw_rpf);
+
p_nt4ga_stat->mp_stat_structs_color =
calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 2aee3f8425..ed24a892ec 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -8,6 +8,7 @@
#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_rpf.h"
#include "nthw_fpga_model.h"
#define NT_MAX_COLOR_FLOW_STATS 0x400
@@ -102,6 +103,7 @@ struct flm_counters_v1 {
struct nt4ga_stat_s {
nthw_stat_t *mp_nthw_stat;
nthw_rmc_t *mp_nthw_rmc;
+ nthw_rpf_t *mp_nthw_rpf;
struct nt_dma_s *p_stat_dma;
uint32_t *p_stat_dma_virtual;
uint32_t n_stat_size;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 216341bb11..ed5a201fd5 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_iic.c',
'nthw/core/nthw_mac_pcs.c',
'nthw/core/nthw_pcie3.c',
+ 'nthw/core/nthw_rpf.c',
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
new file mode 100644
index 0000000000..4c6c57ba55
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
@@ -0,0 +1,48 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTHW_RPF_HPP_
+#define NTHW_RPF_HPP_
+
+#include "nthw_fpga_model.h"
+#include "pthread.h"
+struct nthw_rpf {
+ nthw_fpga_t *mp_fpga;
+
+ nthw_module_t *m_mod_rpf;
+
+ int mn_instance;
+
+ nthw_register_t *mp_reg_control;
+ nthw_field_t *mp_fld_control_pen;
+ nthw_field_t *mp_fld_control_rpp_en;
+ nthw_field_t *mp_fld_control_st_tgl_en;
+ nthw_field_t *mp_fld_control_keep_alive_en;
+
+ nthw_register_t *mp_ts_sort_prg;
+ nthw_field_t *mp_fld_ts_sort_prg_maturing_delay;
+ nthw_field_t *mp_fld_ts_sort_prg_ts_at_eof;
+
+ int m_default_maturing_delay;
+ bool m_administrative_block; /* used to enforce license expiry */
+
+ pthread_mutex_t rpf_mutex;
+};
+
+typedef struct nthw_rpf nthw_rpf_t;
+typedef struct nthw_rpf nt_rpf;
+
+nthw_rpf_t *nthw_rpf_new(void);
+void nthw_rpf_delete(nthw_rpf_t *p);
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rpf_administrative_block(nthw_rpf_t *p);
+void nthw_rpf_block(nthw_rpf_t *p);
+void nthw_rpf_unblock(nthw_rpf_t *p);
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay);
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p);
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable);
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p);
+
+#endif
diff --git a/drivers/net/ntnic/nthw/core/nthw_rpf.c b/drivers/net/ntnic/nthw/core/nthw_rpf.c
new file mode 100644
index 0000000000..81c704d01a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_rpf.c
@@ -0,0 +1,119 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+#include "nthw_rpf.h"
+
+nthw_rpf_t *nthw_rpf_new(void)
+{
+ nthw_rpf_t *p = malloc(sizeof(nthw_rpf_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_rpf_t));
+
+ return p;
+}
+
+void nthw_rpf_delete(nthw_rpf_t *p)
+{
+ if (p) {
+ memset(p, 0, sizeof(nthw_rpf_t));
+ free(p);
+ }
+}
+
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *p_mod = nthw_fpga_query_module(p_fpga, MOD_RPF, n_instance);
+
+ if (p == NULL)
+ return p_mod == NULL ? -1 : 0;
+
+ if (p_mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: MOD_RPF %d: no such instance",
+ p->mp_fpga->p_fpga_info->mp_adapter_id_str, p->mn_instance);
+ return -1;
+ }
+
+ p->m_mod_rpf = p_mod;
+
+ p->mp_fpga = p_fpga;
+
+ p->m_administrative_block = false;
+
+ /* CONTROL */
+ p->mp_reg_control = nthw_module_get_register(p->m_mod_rpf, RPF_CONTROL);
+ p->mp_fld_control_pen = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_PEN);
+ p->mp_fld_control_rpp_en = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_RPP_EN);
+ p->mp_fld_control_st_tgl_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_ST_TGL_EN);
+ p->mp_fld_control_keep_alive_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_KEEP_ALIVE_EN);
+
+ /* TS_SORT_PRG */
+ p->mp_ts_sort_prg = nthw_module_get_register(p->m_mod_rpf, RPF_TS_SORT_PRG);
+ p->mp_fld_ts_sort_prg_maturing_delay =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_MATURING_DELAY);
+ p->mp_fld_ts_sort_prg_ts_at_eof =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_TS_AT_EOF);
+ p->m_default_maturing_delay =
+ nthw_fpga_get_product_param(p_fpga, NT_RPF_MATURING_DEL_DEFAULT, 0);
+
+ /* Initialize mutex */
+ pthread_mutex_init(&p->rpf_mutex, NULL);
+ return 0;
+}
+
+void nthw_rpf_administrative_block(nthw_rpf_t *p)
+{
+ /* block all MAC ports */
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+
+ p->m_administrative_block = true;
+}
+
+void nthw_rpf_block(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+}
+
+void nthw_rpf_unblock(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+
+ nthw_field_set_val32(p->mp_fld_control_pen, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_rpp_en, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_st_tgl_en, 1);
+ nthw_field_set_val_flush32(p->mp_fld_control_keep_alive_en, 1);
+}
+
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_maturing_delay, (uint32_t)delay);
+}
+
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ /* Maturing delay is a two's complement 18 bit value, so we retrieve it as signed */
+ return nthw_field_get_signed(p->mp_fld_ts_sort_prg_maturing_delay);
+}
+
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_ts_at_eof, enable);
+}
+
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p)
+{
+ return nthw_field_get_updated(p->mp_fld_ts_sort_prg_ts_at_eof);
+}
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
index 4d495f5b96..9eaaeb550d 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
@@ -1050,6 +1050,18 @@ uint32_t nthw_field_get_val32(const nthw_field_t *p)
return val;
}
+int32_t nthw_field_get_signed(const nthw_field_t *p)
+{
+ uint32_t val;
+
+ nthw_field_get_val(p, &val, 1);
+
+ if (val & (1U << nthw_field_get_bit_pos_high(p))) /* check sign */
+ val = val | ~nthw_field_get_mask(p); /* sign extension */
+
+ return (int32_t)val; /* cast to signed value */
+}
+
uint32_t nthw_field_get_updated(const nthw_field_t *p)
{
uint32_t val;
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
index 7956f0689e..d4e7ab3edd 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
@@ -227,6 +227,7 @@ void nthw_field_get_val(const nthw_field_t *p, uint32_t *p_data, uint32_t len);
void nthw_field_set_val(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
void nthw_field_set_val_flush(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
uint32_t nthw_field_get_val32(const nthw_field_t *p);
+int32_t nthw_field_get_signed(const nthw_field_t *p);
uint32_t nthw_field_get_updated(const nthw_field_t *p);
void nthw_field_update_register(const nthw_field_t *p);
void nthw_field_flush_register(const nthw_field_t *p);
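The sign extension performed by the nthw_field_get_signed() helper added above can be illustrated stand-alone. The function below is a hypothetical self-contained version, not the driver API: a register field of `width` bits holds a two's complement value (18 bits for the RPF maturing delay), and the high bits outside the field mask are filled with the sign bit before casting to signed.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-alone sketch of the sign extension done by
 * nthw_field_get_signed(): 'raw' is the field value as read from
 * the register, 'width' is the field width in bits. */
static int32_t field_to_signed(uint32_t raw, unsigned int width)
{
	uint32_t mask = (width >= 32) ? ~0U : ((1U << width) - 1U);

	raw &= mask;                    /* keep only the field bits    */
	if (raw & (1U << (width - 1)))  /* sign bit set?               */
		raw |= ~mask;           /* extend sign into upper bits */

	return (int32_t)raw;
}
```

For an 18-bit field, 0x3FFFF sign-extends to -1 and 0x20000 to -131072, while small positive values pass through unchanged.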
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index ddc144dc02..03122acaf5 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,6 +41,7 @@
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
#define MOD_RPL (0x6de535c3UL)
+#define MOD_RPF (0x8d30dcddUL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 8f196f885f..7067f4b1d0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -39,6 +39,7 @@
#include "nthw_fpga_reg_defs_qsl.h"
#include "nthw_fpga_reg_defs_rac.h"
#include "nthw_fpga_reg_defs_rmc.h"
+#include "nthw_fpga_reg_defs_rpf.h"
#include "nthw_fpga_reg_defs_rpl.h"
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
new file mode 100644
index 0000000000..72f450b85d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_RPF_
+#define _NTHW_FPGA_REG_DEFS_RPF_
+
+/* RPF */
+#define RPF_CONTROL (0x7a5bdb50UL)
+#define RPF_CONTROL_KEEP_ALIVE_EN (0x80be3ffcUL)
+#define RPF_CONTROL_PEN (0xb23137b8UL)
+#define RPF_CONTROL_RPP_EN (0xdb51f109UL)
+#define RPF_CONTROL_ST_TGL_EN (0x45a6ecfaUL)
+#define RPF_TS_SORT_PRG (0xff1d137eUL)
+#define RPF_TS_SORT_PRG_MATURING_DELAY (0x2a38e127UL)
+#define RPF_TS_SORT_PRG_TS_AT_EOF (0x9f27d433UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_RPF_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 56/73] net/ntnic: add statistics poll
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (54 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 55/73] net/ntnic: add rpf module Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 57/73] net/ntnic: added flm stat interface Serhii Iliushyk
` (16 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add a mechanism that polls the statistics module and updates the
values via the DMA module.
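The accumulation scheme used throughout this patch (port counters, color counters) follows one base/total pattern: the FPGA exposes free-running totals, and each poll adds the increment since the previous poll to the software counter, then advances the base. A minimal sketch with hypothetical names (not the driver's structures):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-alone version of the base/total pattern used by
 * poll_statistics(): 'base' remembers the HW total at the last poll,
 * so each poll only adds the delta accumulated since then. */
struct counter {
	uint64_t accumulated; /* software-visible counter  */
	uint64_t base;        /* HW total at the last poll */
};

static void poll_counter(struct counter *c, uint64_t hw_total)
{
	c->accumulated += hw_total - c->base; /* add the increment */
	c->base = hw_total;                   /* advance the base  */
}

/* Three successive polls: 100 pkts, then +50, then no change. */
static uint64_t example_polls(void)
{
	struct counter c = { 0, 0 };

	poll_counter(&c, 100);
	poll_counter(&c, 150);
	poll_counter(&c, 150);
	return c.accumulated;
}
```

The same delta logic appears per-queue (tx_pkts/tx_bytes against a_port_tx_*_base) and per-color in the patch; polling twice with an unchanged HW total adds nothing.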
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 343 ++++++++++++++++++
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 78 ++++
.../net/ntnic/nthw/core/include/nthw_rmc.h | 5 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 20 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 1 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 128 +++++++
drivers/net/ntnic/ntnic_ethdev.c | 143 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 +
9 files changed, 721 insertions(+)
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index f733fd5459..3afc5b7853 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -16,6 +16,27 @@
#define DEFAULT_MAX_BPS_SPEED 100e9
+/* Inline timestamp format is pcap 32:32 bits. Convert to nsecs */
+static inline uint64_t timestamp2ns(uint64_t ts)
+{
+ return ((ts) >> 32) * 1000000000 + ((ts) & 0xffffffff);
+}
+
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual);
+
+static int nt4ga_stat_collect(struct adapter_info_s *p_adapter_info, nt4ga_stat_t *p_nt4ga_stat)
+{
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ p_nt4ga_stat->last_timestamp = timestamp2ns(*p_nthw_stat->mp_timestamp);
+ nt4ga_stat_collect_cap_v1_stats(p_adapter_info, p_nt4ga_stat,
+ p_nt4ga_stat->p_stat_dma_virtual);
+
+ return 0;
+}
+
static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
{
const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
@@ -203,9 +224,331 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return 0;
}
+/* Called with stat mutex locked */
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual)
+{
+ (void)p_adapter_info;
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL)
+ return -1;
+
+	if (!p_nt4ga_stat)
+		return -1;
+
+	nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+	if (!p_nthw_stat)
+		return -1;
+
+	const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
+	const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
+	int c, h, p;
+
+ if (p_nthw_stat->mn_stat_layout_version < 6) {
+ NT_LOG(ERR, NTNIC, "HW STA module version not supported");
+ return -1;
+ }
+
+ /* RX ports */
+ for (c = 0; c < p_nthw_stat->m_nb_color_counters / 2; c++) {
+ p_nt4ga_stat->mp_stat_structs_color[c].color_packets += p_stat_dma_virtual[c * 2];
+ p_nt4ga_stat->mp_stat_structs_color[c].color_bytes +=
+ p_stat_dma_virtual[c * 2 + 1];
+ }
+
+ /* Move to Host buffer counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_color_counters;
+
+ for (h = 0; h < p_nthw_stat->m_nb_rx_host_buffers; h++) {
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_packets += p_stat_dma_virtual[h * 8];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_packets += p_stat_dma_virtual[h * 8 + 1];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_packets += p_stat_dma_virtual[h * 8 + 2];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_packets +=
+ p_stat_dma_virtual[h * 8 + 3];
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_bytes += p_stat_dma_virtual[h * 8 + 4];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_bytes += p_stat_dma_virtual[h * 8 + 5];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_bytes += p_stat_dma_virtual[h * 8 + 6];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_bytes +=
+ p_stat_dma_virtual[h * 8 + 7];
+ }
+
+ /* Move to Rx Port counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_rx_hb_counters;
+
+ /* RX ports */
+ for (p = 0; p < n_rx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 23];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].duplicate +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 24];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_ip_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 25];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_udp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 26];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_tcp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 27];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_giant_undersize +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 28];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_baby_giant +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 29];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_not_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 30];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 31];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 32];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 33];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 34];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 35];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 36];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 37];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 43];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 44];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 45];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 46];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 47]
+ : 0;
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 48];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 49];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 50];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 51];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 52];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 53];
+
+ /* Rx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41] +
+ (p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0);
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_rx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+ p_nt4ga_stat->a_port_rx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_rx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Move to Tx Port counters */
+ p_stat_dma_virtual += n_rx_ports * p_nthw_stat->m_nb_rx_port_counters;
+
+ for (p = 0; p < n_tx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 23];
+
+ /* Tx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_tx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+ p_nt4ga_stat->a_port_tx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_tx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Update and get port load counters */
+ for (p = 0; p < n_rx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ for (p = 0; p < n_tx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ return 0;
+}
+
static struct nt4ga_stat_ops ops = {
.nt4ga_stat_init = nt4ga_stat_init,
.nt4ga_stat_setup = nt4ga_stat_setup,
+ .nt4ga_stat_collect = nt4ga_stat_collect
};
void nt4ga_stat_ops_init(void)
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 1135e9a539..38e4d0ca35 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -16,6 +16,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
+ rte_thread_t stat_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index ed24a892ec..0735dbc085 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -85,16 +85,87 @@ struct color_counters {
};
struct host_buffer_counters {
+ uint64_t flush_packets;
+ uint64_t drop_packets;
+ uint64_t fwd_packets;
+ uint64_t dbs_drop_packets;
+ uint64_t flush_bytes;
+ uint64_t drop_bytes;
+ uint64_t fwd_bytes;
+ uint64_t dbs_drop_bytes;
};
struct port_load_counters {
+ uint64_t rx_pps;
uint64_t rx_pps_max;
+ uint64_t tx_pps;
uint64_t tx_pps_max;
+ uint64_t rx_bps;
uint64_t rx_bps_max;
+ uint64_t tx_bps;
uint64_t tx_bps_max;
};
struct port_counters_v2 {
+ /* Rx/Tx common port counters */
+ uint64_t drop_events;
+ uint64_t pkts;
+ /* FPGA counters */
+ uint64_t octets;
+ uint64_t broadcast_pkts;
+ uint64_t multicast_pkts;
+ uint64_t unicast_pkts;
+ uint64_t pkts_alignment;
+ uint64_t pkts_code_violation;
+ uint64_t pkts_crc;
+ uint64_t undersize_pkts;
+ uint64_t oversize_pkts;
+ uint64_t fragments;
+ uint64_t jabbers_not_truncated;
+ uint64_t jabbers_truncated;
+ uint64_t pkts_64_octets;
+ uint64_t pkts_65_to_127_octets;
+ uint64_t pkts_128_to_255_octets;
+ uint64_t pkts_256_to_511_octets;
+ uint64_t pkts_512_to_1023_octets;
+ uint64_t pkts_1024_to_1518_octets;
+ uint64_t pkts_1519_to_2047_octets;
+ uint64_t pkts_2048_to_4095_octets;
+ uint64_t pkts_4096_to_8191_octets;
+ uint64_t pkts_8192_to_max_octets;
+ uint64_t mac_drop_events;
+ uint64_t pkts_lr;
+ /* Rx only port counters */
+ uint64_t duplicate;
+ uint64_t pkts_ip_chksum_error;
+ uint64_t pkts_udp_chksum_error;
+ uint64_t pkts_tcp_chksum_error;
+ uint64_t pkts_giant_undersize;
+ uint64_t pkts_baby_giant;
+ uint64_t pkts_not_isl_vlan_mpls;
+ uint64_t pkts_isl;
+ uint64_t pkts_vlan;
+ uint64_t pkts_isl_vlan;
+ uint64_t pkts_mpls;
+ uint64_t pkts_isl_mpls;
+ uint64_t pkts_vlan_mpls;
+ uint64_t pkts_isl_vlan_mpls;
+ uint64_t pkts_no_filter;
+ uint64_t pkts_dedup_drop;
+ uint64_t pkts_filter_drop;
+ uint64_t pkts_overflow;
+ uint64_t pkts_dbs_drop;
+ uint64_t octets_no_filter;
+ uint64_t octets_dedup_drop;
+ uint64_t octets_filter_drop;
+ uint64_t octets_overflow;
+ uint64_t octets_dbs_drop;
+ uint64_t ipft_first_hit;
+ uint64_t ipft_first_not_hit;
+ uint64_t ipft_mid_hit;
+ uint64_t ipft_mid_not_hit;
+ uint64_t ipft_last_hit;
+ uint64_t ipft_last_not_hit;
};
struct flm_counters_v1 {
@@ -147,6 +218,8 @@ struct nt4ga_stat_s {
uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_drops_total[NUM_ADAPTER_PORTS_MAX];
};
typedef struct nt4ga_stat_s nt4ga_stat_t;
@@ -159,4 +232,9 @@ int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
uint32_t *p_stat_dma_virtual);
int nthw_stat_trigger(nthw_stat_t *p);
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index b239752674..9c40804cd9 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -47,4 +47,9 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p);
+
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 748519aeb4..570a179fc8 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,26 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_sf_ram_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_descr_fifo_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p)
+{
+ return (p->mp_reg_dbg) ? nthw_field_get_updated(p->mp_fld_dbg_merge) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p)
+{
+ return (p->mp_reg_mac_if) ? nthw_field_get_updated(p->mp_fld_mac_if_err) : 0xffffffff;
+}
+
void nthw_rmc_block(nthw_rmc_t *p)
{
/* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d61044402d..aac3144cc0 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
+#include "ntlog.h"
#include "ntnic_mod_reg.h"
#include "flow_api.h"
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
index 6adcd2e090..078eec5e1f 100644
--- a/drivers/net/ntnic/nthw/stat/nthw_stat.c
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -368,3 +368,131 @@ int nthw_stat_trigger(nthw_stat_t *p)
return 0;
}
+
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 86876ecda6..f94340f489 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -4,6 +4,9 @@
*/
#include <stdint.h>
+#include <stdarg.h>
+
+#include <signal.h>
#include <rte_eal.h>
#include <rte_dev.h>
@@ -25,6 +28,7 @@
#include "nt_util.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
#define THREAD_JOIN(a) rte_thread_join(a, NULL)
#define THREAD_FUNC static uint32_t
@@ -67,6 +71,9 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
uint64_t rte_tsc_freq;
+static void (*previous_handler)(int sig);
+static rte_thread_t shutdown_tid;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -1407,6 +1414,7 @@ drv_deinit(struct drv_s *p_drv)
/* stop statistics threads */
p_drv->ntdrv.b_shutdown = true;
+ THREAD_JOIN(p_nt_drv->stat_thread);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
@@ -1628,6 +1636,87 @@ THREAD_FUNC adapter_flm_update_thread_fn(void *context)
return THREAD_RETURN;
}
+/*
+ * Adapter stat thread
+ */
+THREAD_FUNC adapter_stat_thread_fn(void *context)
+{
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
+
+ if (nt4ga_stat_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "Statistics module uninitialized");
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const char *const p_adapter_id_str = p_nt_drv->adapter_info.mp_adapter_id_str;
+ (void)p_adapter_id_str;
+
+ if (!p_nthw_stat)
+ return THREAD_RETURN;
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: begin", p_adapter_id_str);
+
+ assert(p_nthw_stat);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ nt_os_wait_usec(10 * 1000);
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ uint32_t loop = 0;
+
+ while ((!p_drv->ntdrv.b_shutdown) &&
+ (*p_nthw_stat->mp_timestamp == (uint64_t)-1)) {
+ nt_os_wait_usec(1 * 100);
+
+ if (rte_log_get_level(nt_log_ntnic) == RTE_LOG_DEBUG &&
+ (++loop & 0x3fff) == 0) {
+ if (p_nt4ga_stat->mp_nthw_rpf) {
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+
+ } else if (p_nt4ga_stat->mp_nthw_rmc) {
+ uint32_t sf_ram_of =
+ nthw_rmc_get_status_sf_ram_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+ uint32_t descr_fifo_of =
+ nthw_rmc_get_status_descr_fifo_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+
+ uint32_t dbg_merge =
+ nthw_rmc_get_dbg_merge(p_nt4ga_stat->mp_nthw_rmc);
+ uint32_t mac_if_err =
+ nthw_rmc_get_mac_if_err(p_nt4ga_stat->mp_nthw_rmc);
+
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+ NT_LOG(ERR, NTNIC, "SF RAM Overflow : %08x",
+ sf_ram_of);
+ NT_LOG(ERR, NTNIC, "Descr Fifo Overflow : %08x",
+ descr_fifo_of);
+ NT_LOG(ERR, NTNIC, "DBG Merge : %08x",
+ dbg_merge);
+ NT_LOG(ERR, NTNIC, "MAC If Errors : %08x",
+ mac_if_err);
+ }
+ }
+ }
+
+ /* Check then collect */
+ {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nt4ga_stat_ops->nt4ga_stat_collect(&p_nt_drv->adapter_info, p_nt4ga_stat);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ }
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: end", p_adapter_id_str);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1885,6 +1974,16 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
}
}
+ pthread_mutex_init(&p_nt_drv->stat_lck, NULL);
+ res = THREAD_CTRL_CREATE(&p_nt_drv->stat_thread, "nt4ga_stat_thr", adapter_stat_thread_fn,
+ (void *)p_drv);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
@@ -2075,6 +2174,48 @@ nthw_pci_dev_deinit(struct rte_eth_dev *eth_dev __rte_unused)
return 0;
}
+static void signal_handler_func_int(int sig)
+{
+ if (sig != SIGINT) {
+ signal(sig, previous_handler);
+ raise(sig);
+ return;
+ }
+
+ kill_pmd = 1;
+}
+
+THREAD_FUNC shutdown_thread(void *arg __rte_unused)
+{
+ while (!kill_pmd)
+ nt_os_wait_usec(100 * 1000);
+
+ NT_LOG_DBGX(DBG, NTNIC, "Shutting down because of ctrl+C");
+
+ signal(SIGINT, previous_handler);
+ raise(SIGINT);
+
+ return THREAD_RETURN;
+}
+
+static int init_shutdown(void)
+{
+ NT_LOG(DBG, NTNIC, "Starting shutdown handler");
+ kill_pmd = 0;
+ previous_handler = signal(SIGINT, signal_handler_func_int);
+ THREAD_CREATE(&shutdown_tid, shutdown_thread, NULL);
+
+ /*
+ * 1 time calculation of 1 sec stat update rtc cycles to prevent stat poll
+ * flooding by OVS from multiple virtual port threads - no need to be precise
+ */
+ uint64_t now_rtc = rte_get_tsc_cycles();
+ nt_os_wait_usec(10 * 1000);
+ rte_tsc_freq = 100 * (rte_get_tsc_cycles() - now_rtc);
+
+ return 0;
+}
+
static int
nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
@@ -2117,6 +2258,8 @@ nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
ret = nthw_pci_dev_init(pci_dev);
+ init_shutdown();
+
NT_LOG_DBGX(DBG, NTNIC, "leave: ret=%d", ret);
return ret;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 30b9afb7d3..8b825d8c48 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -186,6 +186,8 @@ void port_init(void);
struct nt4ga_stat_ops {
int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_collect)(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat);
};
void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 57/73] net/ntnic: added flm stat interface
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (55 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 56/73] net/ntnic: add statistics poll Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 58/73] net/ntnic: add tsm module Serhii Iliushyk
` (15 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
flm stat module interface was added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 2 ++
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 ++
4 files changed, 16 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 4a1525f237..ed96f77bc0 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -233,4 +233,6 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_filter.h b/drivers/net/ntnic/include/flow_filter.h
index d204c0d882..01777f8c9f 100644
--- a/drivers/net/ntnic/include/flow_filter.h
+++ b/drivers/net/ntnic/include/flow_filter.h
@@ -11,5 +11,6 @@
int flow_filter_init(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device, int adapter_no);
int flow_filter_done(struct flow_nic_dev *dev);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
#endif /* __FLOW_FILTER_HPP__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index aac3144cc0..e953fc1a12 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1048,6 +1048,16 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ (void)ndev;
+ (void)data;
+ (void)size;
+
+ NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
+ return -1;
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
@@ -1062,6 +1072,7 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+ .flow_get_flm_stats = flow_get_flm_stats,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8b825d8c48..8703d478b6 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -336,6 +336,8 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
/*
* Other
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 58/73] net/ntnic: add tsm module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (56 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 57/73] net/ntnic: added flm stat interface Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 59/73] net/ntnic: add STA module Serhii Iliushyk
` (14 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
tsm module which operate with timers
in the physical nic was added.
Necessary defines and implementation were added.
The Time Stamp Module controls every aspect of packet timestamping,
including time synchronization, time stamp format, PTP protocol, etc.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 ++++++
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +++++
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 28 +++
7 files changed, 301 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index ed5a201fd5..a6c4fec0be 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -41,6 +41,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
'nthw/core/nthw_gmf.c',
+ 'nthw/core/nthw_tsm.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_tsm.h b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
new file mode 100644
index 0000000000..0a3bcdcaf5
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
@@ -0,0 +1,56 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_TSM_H__
+#define __NTHW_TSM_H__
+
+#include "stdint.h"
+
+#include "nthw_fpga_model.h"
+
+struct nthw_tsm {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_tsm;
+ int mn_instance;
+
+ nthw_field_t *mp_fld_config_ts_format;
+
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t0;
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t1;
+
+ nthw_field_t *mp_fld_timer_timer_t0_max_count;
+
+ nthw_field_t *mp_fld_timer_timer_t1_max_count;
+
+ nthw_register_t *mp_reg_ts_lo;
+ nthw_field_t *mp_fld_ts_lo;
+
+ nthw_register_t *mp_reg_ts_hi;
+ nthw_field_t *mp_fld_ts_hi;
+
+ nthw_register_t *mp_reg_time_lo;
+ nthw_field_t *mp_fld_time_lo;
+
+ nthw_register_t *mp_reg_time_hi;
+ nthw_field_t *mp_fld_time_hi;
+};
+
+typedef struct nthw_tsm nthw_tsm_t;
+typedef struct nthw_tsm nthw_tsm;
+
+nthw_tsm_t *nthw_tsm_new(void);
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts);
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time);
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val);
+
+#endif /* __NTHW_TSM_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_fpga.c b/drivers/net/ntnic/nthw/core/nthw_fpga.c
index 9448c29de1..ca69a9d5b1 100644
--- a/drivers/net/ntnic/nthw/core/nthw_fpga.c
+++ b/drivers/net/ntnic/nthw/core/nthw_fpga.c
@@ -13,6 +13,8 @@
#include "nthw_fpga_instances.h"
#include "nthw_fpga_mod_str_map.h"
+#include "nthw_tsm.h"
+
#include <arpa/inet.h>
int nthw_fpga_get_param_info(struct fpga_info_s *p_fpga_info, nthw_fpga_t *p_fpga)
@@ -179,6 +181,7 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
nthw_hif_t *p_nthw_hif = NULL;
nthw_pcie3_t *p_nthw_pcie3 = NULL;
nthw_rac_t *p_nthw_rac = NULL;
+ nthw_tsm_t *p_nthw_tsm = NULL;
mcu_info_t *p_mcu_info = &p_fpga_info->mcu_info;
uint64_t n_fpga_ident = 0;
@@ -331,6 +334,50 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
p_fpga_info->mp_nthw_hif = p_nthw_hif;
+ p_nthw_tsm = nthw_tsm_new();
+
+ if (p_nthw_tsm) {
+ nthw_tsm_init(p_nthw_tsm, p_fpga, 0);
+
+ nthw_tsm_set_config_ts_format(p_nthw_tsm, 1); /* 1 = TSM: TS format native */
+
+ /* Timer T0 - stat toggle timer */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t0_max_count(p_nthw_tsm, 50 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, true);
+
+ /* Timer T1 - keep alive timer */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t1_max_count(p_nthw_tsm, 100 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, true);
+ }
+
+ p_fpga_info->mp_nthw_tsm = p_nthw_tsm;
+
+ /* TSM sample triggering: test validation... */
+#if defined(DEBUG) && (1)
+ {
+ uint64_t n_time, n_ts;
+ int i;
+
+ for (i = 0; i < 4; i++) {
+ if (p_nthw_hif)
+ nthw_hif_trigger_sample_time(p_nthw_hif);
+
+ else if (p_nthw_pcie3)
+ nthw_pcie3_trigger_sample_time(p_nthw_pcie3);
+
+ nthw_tsm_get_time(p_nthw_tsm, &n_time);
+ nthw_tsm_get_ts(p_nthw_tsm, &n_ts);
+
+ NT_LOG(DBG, NTHW, "%s: TSM time: %016" PRIX64 " %016" PRIX64 "\n",
+ p_adapter_id_str, n_time, n_ts);
+
+ nt_os_wait_usec(1000);
+ }
+ }
+#endif
+
return res;
}
diff --git a/drivers/net/ntnic/nthw/core/nthw_tsm.c b/drivers/net/ntnic/nthw/core/nthw_tsm.c
new file mode 100644
index 0000000000..b88dcb9b0b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_tsm.c
@@ -0,0 +1,167 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_tsm.h"
+
+nthw_tsm_t *nthw_tsm_new(void)
+{
+ nthw_tsm_t *p = malloc(sizeof(nthw_tsm_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_tsm_t));
+
+ return p;
+}
+
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_TSM, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: TSM %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_tsm = mod;
+
+ {
+ nthw_register_t *p_reg;
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_CONFIG);
+ p->mp_fld_config_ts_format = nthw_register_get_field(p_reg, TSM_CONFIG_TS_FORMAT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_CTRL);
+ p->mp_fld_timer_ctrl_timer_en_t0 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T0);
+ p->mp_fld_timer_ctrl_timer_en_t1 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T1);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T0);
+ p->mp_fld_timer_timer_t0_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T0_MAX_COUNT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T1);
+ p->mp_fld_timer_timer_t1_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T1_MAX_COUNT);
+
+ p->mp_reg_time_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_LO);
+ p_reg = p->mp_reg_time_lo;
+ p->mp_fld_time_lo = nthw_register_get_field(p_reg, TSM_TIME_LO_NS);
+
+ p->mp_reg_time_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_HI);
+ p_reg = p->mp_reg_time_hi;
+ p->mp_fld_time_hi = nthw_register_get_field(p_reg, TSM_TIME_HI_SEC);
+
+ p->mp_reg_ts_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_LO);
+ p_reg = p->mp_reg_ts_lo;
+ p->mp_fld_ts_lo = nthw_register_get_field(p_reg, TSM_TS_LO_TIME);
+
+ p->mp_reg_ts_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_HI);
+ p_reg = p->mp_reg_ts_hi;
+ p->mp_fld_ts_hi = nthw_register_get_field(p_reg, TSM_TS_HI_TIME);
+ }
+ return 0;
+}
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts)
+{
+ uint32_t n_ts_lo, n_ts_hi;
+ uint64_t val;
+
+ if (!p_ts)
+ return -1;
+
+ n_ts_lo = nthw_field_get_updated(p->mp_fld_ts_lo);
+ n_ts_hi = nthw_field_get_updated(p->mp_fld_ts_hi);
+
+ val = ((((uint64_t)n_ts_hi) << 32UL) | n_ts_lo);
+
+ if (p_ts)
+ *p_ts = val;
+
+ return 0;
+}
+
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time)
+{
+ uint32_t n_time_lo, n_time_hi;
+ uint64_t val;
+
+ if (!p_time)
+ return -1;
+
+ n_time_lo = nthw_field_get_updated(p->mp_fld_time_lo);
+ n_time_hi = nthw_field_get_updated(p->mp_fld_time_hi);
+
+ val = ((((uint64_t)n_time_hi) << 32UL) | n_time_lo);
+
+ if (p_time)
+ *p_time = val;
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T0 - stat toggle timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t0_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t0_max_count,
+ n_timer_val); /* ns (50*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T1 - keep alive timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t1_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t1_max_count,
+ n_timer_val); /* ns (100*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val)
+{
+ nthw_field_update_register(p->mp_fld_config_ts_format);
+ /* 0x1: Native - 10ns units, start date: 1970-01-01. */
+ nthw_field_set_val_flush32(p->mp_fld_config_ts_format, n_val);
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 03122acaf5..e6ed9e714b 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -48,6 +48,7 @@
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_STA (0x76fae64dUL)
+#define MOD_TSM (0x35422a24UL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7067f4b1d0..4d299c6aa8 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -44,6 +44,7 @@
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
#include "nthw_fpga_reg_defs_sdc.h"
+#include "nthw_fpga_reg_defs_tsm.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
#include "nthw_fpga_reg_defs_sta.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
new file mode 100644
index 0000000000..a087850aa4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_TSM_
+#define _NTHW_FPGA_REG_DEFS_TSM_
+
+/* TSM */
+#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_TIMER_CTRL (0x648da051UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
+#define TSM_TIMER_T0 (0x417217a5UL)
+#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
+#define TSM_TIMER_T1 (0x36752733UL)
+#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HI (0x175acea1UL)
+#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
+#define TSM_TIME_LO (0x9a55ae90UL)
+#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TS_HI (0xccfe9e5eUL)
+#define TSM_TS_HI_TIME (0xc23fed30UL)
+#define TSM_TS_LO (0x41f1fe6fUL)
+#define TSM_TS_LO_TIME (0xe0292a3eUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 59/73] net/ntnic: add STA module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (57 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 58/73] net/ntnic: add tsm module Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 60/73] net/ntnic: add TSM module Serhii Iliushyk
` (13 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with STA module
support, which enables statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 92 ++++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 8 ++
3 files changed, 100 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index a3d9f94fc6..efdb084cd6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2486,6 +2486,95 @@ static nthw_fpga_register_init_s slc_registers[] = {
{ SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
};
+static nthw_fpga_field_init_s sta_byte_fields[] = {
+ { STA_BYTE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_cfg_fields[] = {
+ { STA_CFG_CNT_CLEAR, 1, 1, 0 },
+ { STA_CFG_DMA_ENA, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_cv_err_fields[] = {
+ { STA_CV_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_fcs_err_fields[] = {
+ { STA_FCS_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_lsb_fields[] = {
+ { STA_HOST_ADR_LSB_LSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_msb_fields[] = {
+ { STA_HOST_ADR_MSB_MSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_load_bin_fields[] = {
+ { STA_LOAD_BIN_BIN, 32, 0, 8388607 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_0_fields[] = {
+ { STA_LOAD_BPS_RX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_1_fields[] = {
+ { STA_LOAD_BPS_RX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_0_fields[] = {
+ { STA_LOAD_BPS_TX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_1_fields[] = {
+ { STA_LOAD_BPS_TX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_0_fields[] = {
+ { STA_LOAD_PPS_RX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_1_fields[] = {
+ { STA_LOAD_PPS_RX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_0_fields[] = {
+ { STA_LOAD_PPS_TX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_1_fields[] = {
+ { STA_LOAD_PPS_TX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_pckt_fields[] = {
+ { STA_PCKT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_status_fields[] = {
+ { STA_STATUS_STAT_TOGGLE_MISSED, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s sta_registers[] = {
+ { STA_BYTE, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_byte_fields },
+ { STA_CFG, 0, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, sta_cfg_fields },
+ { STA_CV_ERR, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_cv_err_fields },
+ { STA_FCS_ERR, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_fcs_err_fields },
+ { STA_HOST_ADR_LSB, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_lsb_fields },
+ { STA_HOST_ADR_MSB, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_msb_fields },
+ { STA_LOAD_BIN, 8, 32, NTHW_FPGA_REG_TYPE_WO, 8388607, 1, sta_load_bin_fields },
+ { STA_LOAD_BPS_RX_0, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_0_fields },
+ { STA_LOAD_BPS_RX_1, 13, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_1_fields },
+ { STA_LOAD_BPS_TX_0, 15, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_0_fields },
+ { STA_LOAD_BPS_TX_1, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_1_fields },
+ { STA_LOAD_PPS_RX_0, 10, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_0_fields },
+ { STA_LOAD_PPS_RX_1, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_1_fields },
+ { STA_LOAD_PPS_TX_0, 14, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_0_fields },
+ { STA_LOAD_PPS_TX_1, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_1_fields },
+ { STA_PCKT, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_pckt_fields },
+ { STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2537,6 +2626,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
+ { MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2695,5 +2785,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index 150b9dd976..a2ab266931 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -19,5 +19,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RAC, "RAC" },
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
+ { MOD_STA, "STA" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
index 640ffcbc52..0cd183fcaa 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -7,11 +7,17 @@
#define _NTHW_FPGA_REG_DEFS_STA_
/* STA */
+#define STA_BYTE (0xa08364d4UL)
+#define STA_BYTE_CNT (0x3119e6bcUL)
#define STA_CFG (0xcecaf9f4UL)
#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
#define STA_CFG_CNT_FRZ (0x8c27a596UL)
#define STA_CFG_DMA_ENA (0x940dbacUL)
#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_CV_ERR (0x7db7db5dUL)
+#define STA_CV_ERR_CNT (0x2c02fbbeUL)
+#define STA_FCS_ERR (0xa0de1647UL)
+#define STA_FCS_ERR_CNT (0xc68c37d1UL)
#define STA_HOST_ADR_LSB (0xde569336UL)
#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
#define STA_HOST_ADR_MSB (0xdf94f901UL)
@@ -34,6 +40,8 @@
#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_PCKT (0xecc8f30aUL)
+#define STA_PCKT_CNT (0x63291d16UL)
#define STA_STATUS (0x91c5c51cUL)
#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 60/73] net/ntnic: add TSM module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (58 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 59/73] net/ntnic: add STA module Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 61/73] net/ntnic: add xstats Serhii Iliushyk
` (12 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with TSM module
support, which enables statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../supported/nthw_fpga_9563_055_049_0000.c | 394 +++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 177 ++++++++
4 files changed, 572 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e5d5abd0ed..64351bcdc7 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,6 +12,7 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
+Basic stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efdb084cd6..620968ceb6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2575,6 +2575,397 @@ static nthw_fpga_register_init_s sta_registers[] = {
{ STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
};
+static nthw_fpga_field_init_s tsm_con0_config_fields[] = {
+ { TSM_CON0_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON0_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON0_CONFIG_PORT, 3, 0, 0 }, { TSM_CON0_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON0_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_interface_fields[] = {
+ { TSM_CON0_INTERFACE_EX_TERM, 2, 0, 3 }, { TSM_CON0_INTERFACE_IN_REF_PWM, 8, 12, 128 },
+ { TSM_CON0_INTERFACE_PWM_ENA, 1, 2, 0 }, { TSM_CON0_INTERFACE_RESERVED, 1, 3, 0 },
+ { TSM_CON0_INTERFACE_VTERM_PWM, 8, 4, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_hi_fields[] = {
+ { TSM_CON0_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_lo_fields[] = {
+ { TSM_CON0_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_config_fields[] = {
+ { TSM_CON1_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON1_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON1_CONFIG_PORT, 3, 0, 0 }, { TSM_CON1_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON1_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_hi_fields[] = {
+ { TSM_CON1_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_lo_fields[] = {
+ { TSM_CON1_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_config_fields[] = {
+ { TSM_CON2_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON2_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON2_CONFIG_PORT, 3, 0, 0 }, { TSM_CON2_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON2_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_hi_fields[] = {
+ { TSM_CON2_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_lo_fields[] = {
+ { TSM_CON2_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_config_fields[] = {
+ { TSM_CON3_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON3_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON3_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_hi_fields[] = {
+ { TSM_CON3_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_lo_fields[] = {
+ { TSM_CON3_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_config_fields[] = {
+ { TSM_CON4_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON4_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON4_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_hi_fields[] = {
+ { TSM_CON4_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_lo_fields[] = {
+ { TSM_CON4_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_config_fields[] = {
+ { TSM_CON5_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON5_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON5_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_hi_fields[] = {
+ { TSM_CON5_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_lo_fields[] = {
+ { TSM_CON5_SAMPLE_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_config_fields[] = {
+ { TSM_CON6_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON6_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON6_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_hi_fields[] = {
+ { TSM_CON6_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_lo_fields[] = {
+ { TSM_CON6_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_hi_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_lo_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_config_fields[] = {
+ { TSM_CONFIG_NTTS_SRC, 2, 5, 0 }, { TSM_CONFIG_NTTS_SYNC, 1, 4, 0 },
+ { TSM_CONFIG_TIMESET_EDGE, 2, 8, 1 }, { TSM_CONFIG_TIMESET_SRC, 3, 10, 0 },
+ { TSM_CONFIG_TIMESET_UP, 1, 7, 0 }, { TSM_CONFIG_TS_FORMAT, 4, 0, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_int_config_fields[] = {
+ { TSM_INT_CONFIG_AUTO_DISABLE, 1, 0, 0 },
+ { TSM_INT_CONFIG_MASK, 19, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_int_stat_fields[] = {
+ { TSM_INT_STAT_CAUSE, 19, 1, 0 },
+ { TSM_INT_STAT_ENABLE, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_led_fields[] = {
+ { TSM_LED_LED0_BG_COLOR, 2, 3, 0 }, { TSM_LED_LED0_COLOR, 2, 1, 0 },
+ { TSM_LED_LED0_MODE, 1, 0, 0 }, { TSM_LED_LED0_SRC, 4, 5, 0 },
+ { TSM_LED_LED1_BG_COLOR, 2, 12, 0 }, { TSM_LED_LED1_COLOR, 2, 10, 0 },
+ { TSM_LED_LED1_MODE, 1, 9, 0 }, { TSM_LED_LED1_SRC, 4, 14, 1 },
+ { TSM_LED_LED2_BG_COLOR, 2, 21, 0 }, { TSM_LED_LED2_COLOR, 2, 19, 0 },
+ { TSM_LED_LED2_MODE, 1, 18, 0 }, { TSM_LED_LED2_SRC, 4, 23, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_config_fields[] = {
+ { TSM_NTTS_CONFIG_AUTO_HARDSET, 1, 5, 1 },
+ { TSM_NTTS_CONFIG_EXT_CLK_ADJ, 1, 6, 0 },
+ { TSM_NTTS_CONFIG_HIGH_SAMPLE, 1, 4, 0 },
+ { TSM_NTTS_CONFIG_TS_SRC_FORMAT, 4, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ext_stat_fields[] = {
+ { TSM_NTTS_EXT_STAT_MASTER_ID, 8, 16, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_REV, 8, 24, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_STAT, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_hi_fields[] = {
+ { TSM_NTTS_LIMIT_HI_SEC, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_lo_fields[] = {
+ { TSM_NTTS_LIMIT_LO_NS, 32, 0, 100000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_offset_fields[] = {
+ { TSM_NTTS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_hi_fields[] = {
+ { TSM_NTTS_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_lo_fields[] = {
+ { TSM_NTTS_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_stat_fields[] = {
+ { TSM_NTTS_STAT_NTTS_VALID, 1, 0, 0 },
+ { TSM_NTTS_STAT_SIGNAL_LOST, 8, 1, 0 },
+ { TSM_NTTS_STAT_SYNC_LOST, 8, 9, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_hi_fields[] = {
+ { TSM_NTTS_TS_T0_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_lo_fields[] = {
+ { TSM_NTTS_TS_T0_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_offset_fields[] = {
+ { TSM_NTTS_TS_T0_OFFSET_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_ctrl_fields[] = {
+ { TSM_PB_CTRL_INSTMEM_WR, 1, 1, 0 },
+ { TSM_PB_CTRL_RST, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_instmem_fields[] = {
+ { TSM_PB_INSTMEM_MEM_ADDR, 14, 0, 0 },
+ { TSM_PB_INSTMEM_MEM_DATA, 18, 14, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_i_fields[] = {
+ { TSM_PI_CTRL_I_VAL, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_ki_fields[] = {
+ { TSM_PI_CTRL_KI_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_kp_fields[] = {
+ { TSM_PI_CTRL_KP_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_shl_fields[] = {
+ { TSM_PI_CTRL_SHL_VAL, 4, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_stat_fields[] = {
+ { TSM_STAT_HARD_SYNC, 8, 8, 0 }, { TSM_STAT_LINK_CON0, 1, 0, 0 },
+ { TSM_STAT_LINK_CON1, 1, 1, 0 }, { TSM_STAT_LINK_CON2, 1, 2, 0 },
+ { TSM_STAT_LINK_CON3, 1, 3, 0 }, { TSM_STAT_LINK_CON4, 1, 4, 0 },
+ { TSM_STAT_LINK_CON5, 1, 5, 0 }, { TSM_STAT_NTTS_INSYNC, 1, 6, 0 },
+ { TSM_STAT_PTP_MI_PRESENT, 1, 7, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_ctrl_fields[] = {
+ { TSM_TIMER_CTRL_TIMER_EN_T0, 1, 0, 0 },
+ { TSM_TIMER_CTRL_TIMER_EN_T1, 1, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t0_fields[] = {
+ { TSM_TIMER_T0_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t1_fields[] = {
+ { TSM_TIMER_T1_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_hi_fields[] = {
+ { TSM_TIME_HARDSET_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_lo_fields[] = {
+ { TSM_TIME_HARDSET_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hi_fields[] = {
+ { TSM_TIME_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_lo_fields[] = {
+ { TSM_TIME_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_rate_adj_fields[] = {
+ { TSM_TIME_RATE_ADJ_FRACTION, 29, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_hi_fields[] = {
+ { TSM_TS_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_lo_fields[] = {
+ { TSM_TS_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_offset_fields[] = {
+ { TSM_TS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_fields[] = {
+ { TSM_TS_STAT_OVERRUN, 1, 16, 0 },
+ { TSM_TS_STAT_SAMPLES, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_hi_offset_fields[] = {
+ { TSM_TS_STAT_HI_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_lo_offset_fields[] = {
+ { TSM_TS_STAT_LO_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_hi_fields[] = {
+ { TSM_TS_STAT_TAR_HI_SEC, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_lo_fields[] = {
+ { TSM_TS_STAT_TAR_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x_fields[] = {
+ { TSM_TS_STAT_X_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_hi_fields[] = {
+ { TSM_TS_STAT_X2_HI_NS, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_lo_fields[] = {
+ { TSM_TS_STAT_X2_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_utc_offset_fields[] = {
+ { TSM_UTC_OFFSET_SEC, 8, 0, 0 },
+};
+
+static nthw_fpga_register_init_s tsm_registers[] = {
+ { TSM_CON0_CONFIG, 24, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con0_config_fields },
+ {
+ TSM_CON0_INTERFACE, 25, 20, NTHW_FPGA_REG_TYPE_RW, 524291, 5,
+ tsm_con0_interface_fields
+ },
+ { TSM_CON0_SAMPLE_HI, 27, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_hi_fields },
+ { TSM_CON0_SAMPLE_LO, 26, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_lo_fields },
+ { TSM_CON1_CONFIG, 28, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con1_config_fields },
+ { TSM_CON1_SAMPLE_HI, 30, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_hi_fields },
+ { TSM_CON1_SAMPLE_LO, 29, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_lo_fields },
+ { TSM_CON2_CONFIG, 31, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con2_config_fields },
+ { TSM_CON2_SAMPLE_HI, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_hi_fields },
+ { TSM_CON2_SAMPLE_LO, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_lo_fields },
+ { TSM_CON3_CONFIG, 34, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con3_config_fields },
+ { TSM_CON3_SAMPLE_HI, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_hi_fields },
+ { TSM_CON3_SAMPLE_LO, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_lo_fields },
+ { TSM_CON4_CONFIG, 37, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con4_config_fields },
+ { TSM_CON4_SAMPLE_HI, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_hi_fields },
+ { TSM_CON4_SAMPLE_LO, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_lo_fields },
+ { TSM_CON5_CONFIG, 40, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con5_config_fields },
+ { TSM_CON5_SAMPLE_HI, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_hi_fields },
+ { TSM_CON5_SAMPLE_LO, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_lo_fields },
+ { TSM_CON6_CONFIG, 43, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con6_config_fields },
+ { TSM_CON6_SAMPLE_HI, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_hi_fields },
+ { TSM_CON6_SAMPLE_LO, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_lo_fields },
+ {
+ TSM_CON7_HOST_SAMPLE_HI, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_hi_fields
+ },
+ {
+ TSM_CON7_HOST_SAMPLE_LO, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_lo_fields
+ },
+ { TSM_CONFIG, 0, 13, NTHW_FPGA_REG_TYPE_RW, 257, 6, tsm_config_fields },
+ { TSM_INT_CONFIG, 2, 20, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_int_config_fields },
+ { TSM_INT_STAT, 3, 20, NTHW_FPGA_REG_TYPE_MIXED, 0, 2, tsm_int_stat_fields },
+ { TSM_LED, 4, 27, NTHW_FPGA_REG_TYPE_RW, 16793600, 12, tsm_led_fields },
+ { TSM_NTTS_CONFIG, 13, 7, NTHW_FPGA_REG_TYPE_RW, 32, 4, tsm_ntts_config_fields },
+ { TSM_NTTS_EXT_STAT, 15, 32, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, tsm_ntts_ext_stat_fields },
+ { TSM_NTTS_LIMIT_HI, 23, 16, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_limit_hi_fields },
+ { TSM_NTTS_LIMIT_LO, 22, 32, NTHW_FPGA_REG_TYPE_RW, 100000, 1, tsm_ntts_limit_lo_fields },
+ { TSM_NTTS_OFFSET, 21, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_offset_fields },
+ { TSM_NTTS_SAMPLE_HI, 19, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_hi_fields },
+ { TSM_NTTS_SAMPLE_LO, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_lo_fields },
+ { TSM_NTTS_STAT, 14, 17, NTHW_FPGA_REG_TYPE_RO, 0, 3, tsm_ntts_stat_fields },
+ { TSM_NTTS_TS_T0_HI, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_hi_fields },
+ { TSM_NTTS_TS_T0_LO, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_lo_fields },
+ {
+ TSM_NTTS_TS_T0_OFFSET, 20, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ntts_ts_t0_offset_fields
+ },
+ { TSM_PB_CTRL, 63, 2, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_ctrl_fields },
+ { TSM_PB_INSTMEM, 64, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_instmem_fields },
+ { TSM_PI_CTRL_I, 54, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_i_fields },
+ { TSM_PI_CTRL_KI, 52, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_ki_fields },
+ { TSM_PI_CTRL_KP, 51, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_kp_fields },
+ { TSM_PI_CTRL_SHL, 53, 4, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_shl_fields },
+ { TSM_STAT, 1, 16, NTHW_FPGA_REG_TYPE_RO, 0, 9, tsm_stat_fields },
+ { TSM_TIMER_CTRL, 48, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_timer_ctrl_fields },
+ { TSM_TIMER_T0, 49, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t0_fields },
+ { TSM_TIMER_T1, 50, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t1_fields },
+ { TSM_TIME_HARDSET_HI, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_hi_fields },
+ { TSM_TIME_HARDSET_LO, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_lo_fields },
+ { TSM_TIME_HI, 9, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_hi_fields },
+ { TSM_TIME_LO, 8, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_lo_fields },
+ { TSM_TIME_RATE_ADJ, 10, 29, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_rate_adj_fields },
+ { TSM_TS_HI, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_hi_fields },
+ { TSM_TS_LO, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_lo_fields },
+ { TSM_TS_OFFSET, 7, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ts_offset_fields },
+ { TSM_TS_STAT, 55, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, tsm_ts_stat_fields },
+ {
+ TSM_TS_STAT_HI_OFFSET, 62, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_hi_offset_fields
+ },
+ {
+ TSM_TS_STAT_LO_OFFSET, 61, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_lo_offset_fields
+ },
+ { TSM_TS_STAT_TAR_HI, 57, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_hi_fields },
+ { TSM_TS_STAT_TAR_LO, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_lo_fields },
+ { TSM_TS_STAT_X, 58, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x_fields },
+ { TSM_TS_STAT_X2_HI, 60, 16, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_hi_fields },
+ { TSM_TS_STAT_X2_LO, 59, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_lo_fields },
+ { TSM_UTC_OFFSET, 65, 8, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_utc_offset_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2627,6 +3018,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
{ MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
+ { MOD_TSM, 0, MOD_TSM, 0, 8, NTHW_FPGA_BUS_TYPE_RAB2, 1024, 66, tsm_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2785,5 +3177,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 37, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index a2ab266931..e8ed7faf0d 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -20,5 +20,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
{ MOD_STA, "STA" },
+ { MOD_TSM, "TSM" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
index a087850aa4..cdb733ee17 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -7,8 +7,158 @@
#define _NTHW_FPGA_REG_DEFS_TSM_
/* TSM */
+#define TSM_CON0_CONFIG (0xf893d371UL)
+#define TSM_CON0_CONFIG_BLIND (0x59ccfcbUL)
+#define TSM_CON0_CONFIG_DC_SRC (0x1879812bUL)
+#define TSM_CON0_CONFIG_PORT (0x3ff0bb08UL)
+#define TSM_CON0_CONFIG_PPSIN_2_5V (0xb8e78227UL)
+#define TSM_CON0_CONFIG_SAMPLE_EDGE (0x4a4022ebUL)
+#define TSM_CON0_INTERFACE (0x76e93b59UL)
+#define TSM_CON0_INTERFACE_EX_TERM (0xd079b416UL)
+#define TSM_CON0_INTERFACE_IN_REF_PWM (0x16f73c33UL)
+#define TSM_CON0_INTERFACE_PWM_ENA (0x3629e73fUL)
+#define TSM_CON0_INTERFACE_RESERVED (0xf9c5066UL)
+#define TSM_CON0_INTERFACE_VTERM_PWM (0x6d2b1e23UL)
+#define TSM_CON0_SAMPLE_HI (0x6e536b8UL)
+#define TSM_CON0_SAMPLE_HI_SEC (0x5fc26159UL)
+#define TSM_CON0_SAMPLE_LO (0x8bea5689UL)
+#define TSM_CON0_SAMPLE_LO_NS (0x13d0010dUL)
+#define TSM_CON1_CONFIG (0x3439d3efUL)
+#define TSM_CON1_CONFIG_BLIND (0x98932ebdUL)
+#define TSM_CON1_CONFIG_DC_SRC (0xa1825ac3UL)
+#define TSM_CON1_CONFIG_PORT (0xe266628dUL)
+#define TSM_CON1_CONFIG_PPSIN_2_5V (0x6f05027fUL)
+#define TSM_CON1_CONFIG_SAMPLE_EDGE (0x2f2719adUL)
+#define TSM_CON1_SAMPLE_HI (0xc76be978UL)
+#define TSM_CON1_SAMPLE_HI_SEC (0xe639bab1UL)
+#define TSM_CON1_SAMPLE_LO (0x4a648949UL)
+#define TSM_CON1_SAMPLE_LO_NS (0x8edfe07bUL)
+#define TSM_CON2_CONFIG (0xbab6d40cUL)
+#define TSM_CON2_CONFIG_BLIND (0xe4f20b66UL)
+#define TSM_CON2_CONFIG_DC_SRC (0xb0ff30baUL)
+#define TSM_CON2_CONFIG_PORT (0x5fac0e43UL)
+#define TSM_CON2_CONFIG_PPSIN_2_5V (0xcc5384d6UL)
+#define TSM_CON2_CONFIG_SAMPLE_EDGE (0x808e5467UL)
+#define TSM_CON2_SAMPLE_HI (0x5e898f79UL)
+#define TSM_CON2_SAMPLE_HI_SEC (0xf744d0c8UL)
+#define TSM_CON2_SAMPLE_LO (0xd386ef48UL)
+#define TSM_CON2_SAMPLE_LO_NS (0xf2bec5a0UL)
+#define TSM_CON3_CONFIG (0x761cd492UL)
+#define TSM_CON3_CONFIG_BLIND (0x79fdea10UL)
+#define TSM_CON3_CONFIG_PORT (0x823ad7c6UL)
+#define TSM_CON3_CONFIG_SAMPLE_EDGE (0xe5e96f21UL)
+#define TSM_CON3_SAMPLE_HI (0x9f0750b9UL)
+#define TSM_CON3_SAMPLE_HI_SEC (0x4ebf0b20UL)
+#define TSM_CON3_SAMPLE_LO (0x12083088UL)
+#define TSM_CON3_SAMPLE_LO_NS (0x6fb124d6UL)
+#define TSM_CON4_CONFIG (0x7cd9dd8bUL)
+#define TSM_CON4_CONFIG_BLIND (0x1c3040d0UL)
+#define TSM_CON4_CONFIG_PORT (0xff49d19eUL)
+#define TSM_CON4_CONFIG_SAMPLE_EDGE (0x4adc9b2UL)
+#define TSM_CON4_SAMPLE_HI (0xb63c453aUL)
+#define TSM_CON4_SAMPLE_HI_SEC (0xd5be043aUL)
+#define TSM_CON4_SAMPLE_LO (0x3b33250bUL)
+#define TSM_CON4_SAMPLE_LO_NS (0xa7c8e16UL)
+#define TSM_CON5_CONFIG (0xb073dd15UL)
+#define TSM_CON5_CONFIG_BLIND (0x813fa1a6UL)
+#define TSM_CON5_CONFIG_PORT (0x22df081bUL)
+#define TSM_CON5_CONFIG_SAMPLE_EDGE (0x61caf2f4UL)
+#define TSM_CON5_SAMPLE_HI (0x77b29afaUL)
+#define TSM_CON5_SAMPLE_HI_SEC (0x6c45dfd2UL)
+#define TSM_CON5_SAMPLE_LO (0xfabdfacbUL)
+#define TSM_CON5_SAMPLE_LO_TIME (0x945d87e8UL)
+#define TSM_CON6_CONFIG (0x3efcdaf6UL)
+#define TSM_CON6_CONFIG_BLIND (0xfd5e847dUL)
+#define TSM_CON6_CONFIG_PORT (0x9f1564d5UL)
+#define TSM_CON6_CONFIG_SAMPLE_EDGE (0xce63bf3eUL)
+#define TSM_CON6_SAMPLE_HI (0xee50fcfbUL)
+#define TSM_CON6_SAMPLE_HI_SEC (0x7d38b5abUL)
+#define TSM_CON6_SAMPLE_LO (0x635f9ccaUL)
+#define TSM_CON6_SAMPLE_LO_NS (0xeb124abbUL)
+#define TSM_CON7_HOST_SAMPLE_HI (0xdcd90e52UL)
+#define TSM_CON7_HOST_SAMPLE_HI_SEC (0xd98d3618UL)
+#define TSM_CON7_HOST_SAMPLE_LO (0x51d66e63UL)
+#define TSM_CON7_HOST_SAMPLE_LO_NS (0x8f5594ddUL)
#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_NTTS_SRC (0x1b60227bUL)
+#define TSM_CONFIG_NTTS_SYNC (0x43e0a69dUL)
+#define TSM_CONFIG_TIMESET_EDGE (0x8c381127UL)
+#define TSM_CONFIG_TIMESET_SRC (0xe7590a31UL)
+#define TSM_CONFIG_TIMESET_UP (0x561980c1UL)
#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_INT_CONFIG (0x9a0d52dUL)
+#define TSM_INT_CONFIG_AUTO_DISABLE (0x9581470UL)
+#define TSM_INT_CONFIG_MASK (0xf00cd3d7UL)
+#define TSM_INT_STAT (0xa4611a70UL)
+#define TSM_INT_STAT_CAUSE (0x315168cfUL)
+#define TSM_INT_STAT_ENABLE (0x980a12d1UL)
+#define TSM_LED (0x6ae05f87UL)
+#define TSM_LED_LED0_BG_COLOR (0x897cf9eeUL)
+#define TSM_LED_LED0_COLOR (0x6d7ada39UL)
+#define TSM_LED_LED0_MODE (0x6087b644UL)
+#define TSM_LED_LED0_SRC (0x4fe29639UL)
+#define TSM_LED_LED1_BG_COLOR (0x66be92d0UL)
+#define TSM_LED_LED1_COLOR (0xcb0dd18dUL)
+#define TSM_LED_LED1_MODE (0xabdb65e1UL)
+#define TSM_LED_LED1_SRC (0x7282bf89UL)
+#define TSM_LED_LED2_BG_COLOR (0x8d8929d3UL)
+#define TSM_LED_LED2_COLOR (0xfae5cb10UL)
+#define TSM_LED_LED2_MODE (0x2d4f174fUL)
+#define TSM_LED_LED2_SRC (0x3522c559UL)
+#define TSM_NTTS_CONFIG (0x8bc38bdeUL)
+#define TSM_NTTS_CONFIG_AUTO_HARDSET (0xd75be25dUL)
+#define TSM_NTTS_CONFIG_EXT_CLK_ADJ (0x700425b6UL)
+#define TSM_NTTS_CONFIG_HIGH_SAMPLE (0x37135b7eUL)
+#define TSM_NTTS_CONFIG_TS_SRC_FORMAT (0x6e6e707UL)
+#define TSM_NTTS_EXT_STAT (0x2b0315b7UL)
+#define TSM_NTTS_EXT_STAT_MASTER_ID (0xf263315eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_REV (0xd543795eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_STAT (0x92d96f5eUL)
+#define TSM_NTTS_LIMIT_HI (0x1ddaa85fUL)
+#define TSM_NTTS_LIMIT_HI_SEC (0x315c6ef2UL)
+#define TSM_NTTS_LIMIT_LO (0x90d5c86eUL)
+#define TSM_NTTS_LIMIT_LO_NS (0xe6d94d9aUL)
+#define TSM_NTTS_OFFSET (0x6436e72UL)
+#define TSM_NTTS_OFFSET_NS (0x12d43a06UL)
+#define TSM_NTTS_SAMPLE_HI (0xcdc8aa3eUL)
+#define TSM_NTTS_SAMPLE_HI_SEC (0x4f6588fdUL)
+#define TSM_NTTS_SAMPLE_LO (0x40c7ca0fUL)
+#define TSM_NTTS_SAMPLE_LO_NS (0x6e43ff97UL)
+#define TSM_NTTS_STAT (0x6502b820UL)
+#define TSM_NTTS_STAT_NTTS_VALID (0x3e184471UL)
+#define TSM_NTTS_STAT_SIGNAL_LOST (0x178bedfdUL)
+#define TSM_NTTS_STAT_SYNC_LOST (0xe4cd53dfUL)
+#define TSM_NTTS_TS_T0_HI (0x1300d1b6UL)
+#define TSM_NTTS_TS_T0_HI_TIME (0xa016ae4fUL)
+#define TSM_NTTS_TS_T0_LO (0x9e0fb187UL)
+#define TSM_NTTS_TS_T0_LO_TIME (0x82006941UL)
+#define TSM_NTTS_TS_T0_OFFSET (0xbf70ce4fUL)
+#define TSM_NTTS_TS_T0_OFFSET_COUNT (0x35dd4398UL)
+#define TSM_PB_CTRL (0x7a8b60faUL)
+#define TSM_PB_CTRL_INSTMEM_WR (0xf96e2cbcUL)
+#define TSM_PB_CTRL_RESET (0xa38ade8bUL)
+#define TSM_PB_CTRL_RST (0x3aaa82f4UL)
+#define TSM_PB_INSTMEM (0xb54aeecUL)
+#define TSM_PB_INSTMEM_MEM_ADDR (0x9ac79b6eUL)
+#define TSM_PB_INSTMEM_MEM_DATA (0x65aefa38UL)
+#define TSM_PI_CTRL_I (0x8d71a4e2UL)
+#define TSM_PI_CTRL_I_VAL (0x98baedc9UL)
+#define TSM_PI_CTRL_KI (0xa1bd86cbUL)
+#define TSM_PI_CTRL_KI_GAIN (0x53faa916UL)
+#define TSM_PI_CTRL_KP (0xc5d62e0bUL)
+#define TSM_PI_CTRL_KP_GAIN (0x7723fa45UL)
+#define TSM_PI_CTRL_SHL (0xaa518701UL)
+#define TSM_PI_CTRL_SHL_VAL (0x56f56a6fUL)
+#define TSM_STAT (0xa55bf677UL)
+#define TSM_STAT_HARD_SYNC (0x7fff20fdUL)
+#define TSM_STAT_LINK_CON0 (0x216086f0UL)
+#define TSM_STAT_LINK_CON1 (0x5667b666UL)
+#define TSM_STAT_LINK_CON2 (0xcf6ee7dcUL)
+#define TSM_STAT_LINK_CON3 (0xb869d74aUL)
+#define TSM_STAT_LINK_CON4 (0x260d42e9UL)
+#define TSM_STAT_LINK_CON5 (0x510a727fUL)
+#define TSM_STAT_NTTS_INSYNC (0xb593a245UL)
+#define TSM_STAT_PTP_MI_PRESENT (0x43131eb0UL)
#define TSM_TIMER_CTRL (0x648da051UL)
#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
@@ -16,13 +166,40 @@
#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
#define TSM_TIMER_T1 (0x36752733UL)
#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HARDSET_HI (0xf28bdb46UL)
+#define TSM_TIME_HARDSET_HI_TIME (0x2d9a28baUL)
+#define TSM_TIME_HARDSET_LO (0x7f84bb77UL)
+#define TSM_TIME_HARDSET_LO_TIME (0xf8cefb4UL)
#define TSM_TIME_HI (0x175acea1UL)
#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
#define TSM_TIME_LO (0x9a55ae90UL)
#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TIME_RATE_ADJ (0xb1cc4bb1UL)
+#define TSM_TIME_RATE_ADJ_FRACTION (0xb7ab96UL)
#define TSM_TS_HI (0xccfe9e5eUL)
#define TSM_TS_HI_TIME (0xc23fed30UL)
#define TSM_TS_LO (0x41f1fe6fUL)
#define TSM_TS_LO_TIME (0xe0292a3eUL)
+#define TSM_TS_OFFSET (0x4b2e6e13UL)
+#define TSM_TS_OFFSET_NS (0x68c286b9UL)
+#define TSM_TS_STAT (0x64d41b8cUL)
+#define TSM_TS_STAT_OVERRUN (0xad9db92aUL)
+#define TSM_TS_STAT_SAMPLES (0xb6350e0bUL)
+#define TSM_TS_STAT_HI_OFFSET (0x1aa2ddf2UL)
+#define TSM_TS_STAT_HI_OFFSET_NS (0xeb040e0fUL)
+#define TSM_TS_STAT_LO_OFFSET (0x81218579UL)
+#define TSM_TS_STAT_LO_OFFSET_NS (0xb7ff33UL)
+#define TSM_TS_STAT_TAR_HI (0x65af24b6UL)
+#define TSM_TS_STAT_TAR_HI_SEC (0x7e92f619UL)
+#define TSM_TS_STAT_TAR_LO (0xe8a04487UL)
+#define TSM_TS_STAT_TAR_LO_NS (0xf7b3f439UL)
+#define TSM_TS_STAT_X (0x419f0ddUL)
+#define TSM_TS_STAT_X_NS (0xa48c3f27UL)
+#define TSM_TS_STAT_X2_HI (0xd6b1c517UL)
+#define TSM_TS_STAT_X2_HI_NS (0x4288c50fUL)
+#define TSM_TS_STAT_X2_LO (0x5bbea526UL)
+#define TSM_TS_STAT_X2_LO_NS (0x92633c13UL)
+#define TSM_UTC_OFFSET (0xf622a13aUL)
+#define TSM_UTC_OFFSET_SEC (0xd9c80209UL)
#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
* [PATCH v3 61/73] net/ntnic: add xstats
@ 2024-10-23 17:00 ` Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Extended statistics implementation and initialization were added.
The eth_dev_ops API was extended with the new xstats callbacks
(get, get_names, reset, and the by-id variants).
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 36 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 112 +++
drivers/net/ntnic/ntnic_mod_reg.c | 15 +
drivers/net/ntnic/ntnic_mod_reg.h | 28 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 ++++++++++++++++++
7 files changed, 1022 insertions(+)
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 64351bcdc7..947c7ba3a1 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -13,6 +13,7 @@ Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
Basic stats = Y
+Extended stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 0735dbc085..4d4affa3cf 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -169,6 +169,39 @@ struct port_counters_v2 {
};
struct flm_counters_v1 {
+ /* FLM 0.17 */
+ uint64_t current;
+ uint64_t learn_done;
+ uint64_t learn_ignore;
+ uint64_t learn_fail;
+ uint64_t unlearn_done;
+ uint64_t unlearn_ignore;
+ uint64_t auto_unlearn_done;
+ uint64_t auto_unlearn_ignore;
+ uint64_t auto_unlearn_fail;
+ uint64_t timeout_unlearn_done;
+ uint64_t rel_done;
+ uint64_t rel_ignore;
+ /* FLM 0.20 */
+ uint64_t prb_done;
+ uint64_t prb_ignore;
+ uint64_t sta_done;
+ uint64_t inf_done;
+ uint64_t inf_skip;
+ uint64_t pck_hit;
+ uint64_t pck_miss;
+ uint64_t pck_unh;
+ uint64_t pck_dis;
+ uint64_t csh_hit;
+ uint64_t csh_miss;
+ uint64_t csh_unh;
+ uint64_t cuc_start;
+ uint64_t cuc_move;
+ /* FLM 0.17 Load */
+ uint64_t load_lps;
+ uint64_t load_aps;
+ uint64_t max_lps;
+ uint64_t max_aps;
};
struct nt4ga_stat_s {
@@ -200,6 +233,9 @@ struct nt4ga_stat_s {
struct host_buffer_counters *mp_stat_structs_hb;
struct port_load_counters *mp_port_load;
+ int flm_stat_ver;
+ struct flm_counters_v1 *mp_stat_structs_flm;
+
/* Rx/Tx totals: */
uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index a6c4fec0be..e59ac5bdb3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -31,6 +31,7 @@ sources = files(
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
'ntnic_filter/ntnic_filter.c',
+ 'ntnic_xstats/ntnic_xstats.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index f94340f489..f6a74c7df2 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1496,6 +1496,113 @@ static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct r
return 0;
}
+static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *stats, unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats = ntnic_xstats_ops->nthw_xstats_get(p_nt4ga_stat, stats, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats =
+ ntnic_xstats_ops->nthw_xstats_get_by_id(p_nt4ga_stat, ids, values, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+
+ struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ ntnic_xstats_ops->nthw_xstats_reset(p_nt4ga_stat, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return dpdk_stats_reset(internals, p_nt_drv, if_index);
+}
+
+static int eth_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names(p_nt4ga_stat, xstats_names, size);
+}
+
+static int eth_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names_by_id(p_nt4ga_stat, xstats_names, ids,
+ size);
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1594,6 +1701,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
+ .xstats_get = eth_xstats_get,
+ .xstats_get_names = eth_xstats_get_names,
+ .xstats_reset = eth_xstats_reset,
+ .xstats_get_by_id = eth_xstats_get_by_id,
+ .xstats_get_names_by_id = eth_xstats_get_names_by_id,
.promiscuous_enable = promiscuous_enable,
.rss_hash_update = eth_dev_rss_hash_update,
.rss_hash_conf_get = rss_hash_conf_get,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 355e2032b1..6737d18a6f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -192,3 +192,18 @@ const struct rte_flow_ops *get_dev_flow_ops(void)
return dev_flow_ops;
}
+
+static struct ntnic_xstats_ops *ntnic_xstats_ops;
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops)
+{
+ ntnic_xstats_ops = ops;
+}
+
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void)
+{
+ if (ntnic_xstats_ops == NULL)
+ ntnic_xstats_ops_init();
+
+ return ntnic_xstats_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8703d478b6..65e7972c68 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -7,6 +7,10 @@
#define __NTNIC_MOD_REG_H__
#include <stdint.h>
+
+#include "rte_ethdev.h"
+#include "rte_flow_driver.h"
+
#include "flow_api.h"
#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
@@ -354,4 +358,28 @@ void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
+struct ntnic_xstats_ops {
+ int (*nthw_xstats_get_names)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size);
+ int (*nthw_xstats_get)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port);
+ void (*nthw_xstats_reset)(nt4ga_stat_t *p_nt4ga_stat, uint8_t port);
+ int (*nthw_xstats_get_names_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size);
+ int (*nthw_xstats_get_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port);
+};
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops);
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void);
+void ntnic_xstats_ops_init(void);
+
#endif /* __NTNIC_MOD_REG_H__ */
diff --git a/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
new file mode 100644
index 0000000000..7604afe6a0
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
@@ -0,0 +1,829 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_ethdev.h>
+
+#include "include/ntdrv_4ga.h"
+#include "ntlog.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "stream_binary_flow_api.h"
+#include "ntnic_mod_reg.h"
+
+struct rte_nthw_xstats_names_s {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint8_t source;
+ unsigned int offset;
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.17
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v1[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.18
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v2[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * STA 0.9
+ */
+
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v3[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) },
+
+ /* FLM 0.17 */
+ { "flm_count_load_lps", 3, offsetof(struct flm_counters_v1, load_lps) },
+ { "flm_count_load_aps", 3, offsetof(struct flm_counters_v1, load_aps) },
+ { "flm_count_max_lps", 3, offsetof(struct flm_counters_v1, max_lps) },
+ { "flm_count_max_aps", 3, offsetof(struct flm_counters_v1, max_aps) },
+
+ { "rx_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps) },
+ { "rx_max_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps_max) },
+ { "rx_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps) },
+ { "rx_max_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps_max) },
+ { "tx_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps) },
+ { "tx_max_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps_max) },
+ { "tx_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps) },
+ { "tx_max_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps_max) }
+};
+
+#define NTHW_CAP_XSTATS_NAMES_V1 RTE_DIM(nthw_cap_xstats_names_v1)
+#define NTHW_CAP_XSTATS_NAMES_V2 RTE_DIM(nthw_cap_xstats_names_v2)
+#define NTHW_CAP_XSTATS_NAMES_V3 RTE_DIM(nthw_cap_xstats_names_v3)
+
+/*
+ * Container for the reset values
+ */
+#define NTHW_XSTATS_SIZE NTHW_CAP_XSTATS_NAMES_V3
+
+static uint64_t nthw_xstats_reset_val[NUM_ADAPTER_PORTS_MAX][NTHW_XSTATS_SIZE] = { 0 };
+
+/*
+ * These functions must only be called with the stat mutex locked
+ */
+static int nthw_xstats_get(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n && i < nb_names; i++) {
+ stats[i].id = i;
+
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ stats[i].value = *((uint64_t *)&rx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 2:
+ /* TX stat */
+ stats[i].value = *((uint64_t *)&tx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ stats[i].value = *((uint64_t *)&flm_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[0][i];
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ stats[i].value = *((uint64_t *)&pld_ptr[names[i].offset]);
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ default:
+ stats[i].value = 0;
+ break;
+ }
+ }
+
+ return i;
+}
+
+static int nthw_xstats_get_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+ int count = 0;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] < nb_names) {
+ switch (names[ids[i]].source) {
+ case 1:
+ /* RX stat */
+ values[i] = *((uint64_t *)&rx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 2:
+ /* TX stat */
+ values[i] = *((uint64_t *)&tx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ values[i] = *((uint64_t *)&flm_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[0][ids[i]];
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ values[i] = *((uint64_t *)&pld_ptr[names[ids[i]].offset]);
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ default:
+ values[i] = 0;
+ break;
+ }
+
+ count++;
+ }
+ }
+
+ return count;
+}
+
+static void nthw_xstats_reset(nt4ga_stat_t *p_nt4ga_stat, uint8_t port)
+{
+ unsigned int i;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < nb_names; i++) {
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&rx_ptr[names[i].offset]);
+ break;
+
+ case 2:
+ /* TX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&tx_ptr[names[i].offset]);
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ /* Reset makes no sense for flm_count_current */
+ /* Reset can't be used for load_lps, load_aps, max_lps and max_aps */
+ if (flm_ptr &&
+ (strcmp(names[i].name, "flm_count_current") != 0 &&
+ strcmp(names[i].name, "flm_count_load_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_load_aps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_aps") != 0)) {
+ nthw_xstats_reset_val[0][i] =
+ *((uint64_t *)&flm_ptr[names[i].offset]);
+ }
+
+ break;
+
+ case 4:
+ /* Port load stat */
+ /* No reset */
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/*
+ * These functions do not require the stat mutex to be locked
+ */
+static int nthw_xstats_get_names(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size && i < nb_names; i++) {
+ strlcpy(xstats_names[i].name, names[i].name, sizeof(xstats_names[i].name));
+ count++;
+ }
+
+ return count;
+}
+
+static int nthw_xstats_get_names_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] < nb_names) {
+ strlcpy(xstats_names[i].name,
+ names[ids[i]].name,
+ RTE_ETH_XSTATS_NAME_SIZE);
+ }
+
+ count++;
+ }
+
+ return count;
+}
+
+static struct ntnic_xstats_ops ops = {
+ .nthw_xstats_get_names = nthw_xstats_get_names,
+ .nthw_xstats_get = nthw_xstats_get,
+ .nthw_xstats_reset = nthw_xstats_reset,
+ .nthw_xstats_get_names_by_id = nthw_xstats_get_names_by_id,
+ .nthw_xstats_get_by_id = nthw_xstats_get_by_id
+};
+
+void ntnic_xstats_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "xstats module was initialized");
+ register_ntnic_xstats_ops(&ops);
+}
--
2.45.0
* [PATCH v3 62/73] net/ntnic: added flow statistics
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (60 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 61/73] net/ntnic: add xstats Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 63/73] net/ntnic: add scrub registers Serhii Iliushyk
` (10 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
xstats was extended with flow statistics support. Additional
counters expose learn, unlearn, lps, aps, and other FLM events.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 40 ++++
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +-
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 142 ++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.c | 176 ++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 52 ++++++
.../profile_inline/flow_api_profile_inline.c | 46 +++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +++++
drivers/net/ntnic/ntnic_ethdev.c | 132 +++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +
13 files changed, 656 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 3afc5b7853..8fedfdcd04 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -189,6 +189,24 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return -1;
}
+ if (get_flow_filter_ops() != NULL) {
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+ p_nt4ga_stat->flm_stat_ver = ndev->be.flm.ver;
+ p_nt4ga_stat->mp_stat_structs_flm = calloc(1, sizeof(struct flm_counters_v1));
+
+ if (!p_nt4ga_stat->mp_stat_structs_flm) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_APS_MAX, 0);
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_LPS_MAX, 0);
+ }
+
p_nt4ga_stat->mp_port_load =
calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
@@ -236,6 +254,7 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
return -1;
nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
@@ -542,6 +561,27 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
(uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
}
+ /* Update and get FLM stats */
+ flow_filter_ops->flow_get_flm_stats(ndev, (uint64_t *)p_nt4ga_stat->mp_stat_structs_flm,
+ sizeof(struct flm_counters_v1) / sizeof(uint64_t));
+
+ /*
+ * Calculate correct load values:
+ * rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ * bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) - 1ULL);
+ * load_aps = ((uint64_t)load_aps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ * load_lps = ((uint64_t)load_lps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ *
+ * Simplified it gives:
+ *
+ * load_lps = (load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ * load_aps = (load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ */
+
+ p_nt4ga_stat->mp_stat_structs_flm->load_aps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
+ p_nt4ga_stat->mp_stat_structs_flm->load_lps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
return 0;
}
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 17d5755634..9cd9d92823 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 38e4d0ca35..677aa7b6c8 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -17,6 +17,7 @@ typedef struct ntdrv_4ga_s {
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
rte_thread_t stat_thread;
+ rte_thread_t port_event_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e59ac5bdb3..c0b7729929 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -59,6 +59,7 @@ sources = files(
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
+ 'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index e953fc1a12..efe9a1a3b9 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1050,11 +1050,14 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
- (void)ndev;
- (void)data;
- (void)size;
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ return -1;
+
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE)
+ return profile_inline_ops->flow_get_flm_stats_profile_inline(ndev, data, size);
- NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
return -1;
}
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f4c29b8bde..1845f74166 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,148 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_stat_update(be->be_dev, &be->flm);
+}
+
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STAT_LRN_DONE:
+ *value = be->flm.v25.lrn_done->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_IGNORE:
+ *value = be->flm.v25.lrn_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_FAIL:
+ *value = be->flm.v25.lrn_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_DONE:
+ *value = be->flm.v25.unl_done->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_IGNORE:
+ *value = be->flm.v25.unl_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_DONE:
+ *value = be->flm.v25.rel_done->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_IGNORE:
+ *value = be->flm.v25.rel_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_DONE:
+ *value = be->flm.v25.prb_done->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_IGNORE:
+ *value = be->flm.v25.prb_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_DONE:
+ *value = be->flm.v25.aul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_IGNORE:
+ *value = be->flm.v25.aul_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_FAIL:
+ *value = be->flm.v25.aul_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_TUL_DONE:
+ *value = be->flm.v25.tul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_FLOWS:
+ *value = be->flm.v25.flows->cnt;
+ break;
+
+ case HW_FLM_LOAD_LPS:
+ *value = be->flm.v25.load_lps->lps;
+ break;
+
+ case HW_FLM_LOAD_APS:
+ *value = be->flm.v25.load_aps->aps;
+ break;
+
+ default: {
+ if (_VER_ < 18)
+ return UNSUP_FIELD;
+
+ switch (field) {
+ case HW_FLM_STAT_STA_DONE:
+ *value = be->flm.v25.sta_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_DONE:
+ *value = be->flm.v25.inf_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_SKIP:
+ *value = be->flm.v25.inf_skip->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_HIT:
+ *value = be->flm.v25.pck_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_MISS:
+ *value = be->flm.v25.pck_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_UNH:
+ *value = be->flm.v25.pck_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_DIS:
+ *value = be->flm.v25.pck_dis->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_HIT:
+ *value = be->flm.v25.csh_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_MISS:
+ *value = be->flm.v25.csh_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_UNH:
+ *value = be->flm.v25.csh_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_START:
+ *value = be->flm.v25.cuc_start->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_MOVE:
+ *value = be->flm.v25.cuc_move->cnt;
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+ }
+ break;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
new file mode 100644
index 0000000000..98b0e8347a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -0,0 +1,176 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+#include <rte_errno.h>
+
+#include "ntlog.h"
+#include "flm_evt_queue.h"
+
+/* Local queues for flm statistic events */
+static struct rte_ring *info_q_local[MAX_INFO_LCL_QUEUES];
+
+/* Remote queues for flm statistic events */
+static struct rte_ring *info_q_remote[MAX_INFO_RMT_QUEUES];
+
+/* Local queues for flm status records */
+static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
+
+/* Remote queues for flm status records */
+static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+
+
+static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
+{
+ static_assert((FLM_EVT_ELEM_SIZE & ~(size_t)3) == FLM_EVT_ELEM_SIZE,
+ "FLM EVENT struct size");
+ static_assert((FLM_STAT_ELEM_SIZE & ~(size_t)3) == FLM_STAT_ELEM_SIZE,
+ "FLM STAT struct size");
+ char name[20] = "NONE";
+ struct rte_ring *q;
+ uint32_t elem_size = 0;
+ uint32_t queue_size = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port >= MAX_INFO_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_INFO_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port >= MAX_INFO_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_INFO_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port >= MAX_STAT_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_STAT_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port >= MAX_STAT_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_STAT_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue create illegal caller: %u", caller);
+ return NULL;
+ }
+
+ q = rte_ring_create_elem(name,
+ elem_size,
+ queue_size,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN, FILTER, "FLM queues cannot be created due to error %02X", rte_errno);
+ return NULL;
+ }
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ info_q_local[port] = q;
+ break;
+
+ case FLM_INFO_REMOTE:
+ info_q_remote[port] = q;
+ break;
+
+ case FLM_STAT_LOCAL:
+ stat_q_local[port] = q;
+ break;
+
+ case FLM_STAT_REMOTE:
+ stat_q_remote[port] = q;
+ break;
+
+ default:
+ break;
+ }
+
+ return q;
+}
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+	/* If the queue is not created yet, create it and retry the dequeue */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES) {
+ if (info_q_local[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_local[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_LOCAL) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES) {
+ if (info_q_remote[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_remote[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_REMOTE) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ return -ENOENT;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
new file mode 100644
index 0000000000..238be7a3b2
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_EVT_QUEUE_H_
+#define _FLM_EVT_QUEUE_H_
+
+#include <stdint.h>
+#include <stdbool.h>
+
+struct flm_status_event_s {
+ void *flow;
+ uint32_t learn_ignore : 1;
+ uint32_t learn_failed : 1;
+ uint32_t learn_done : 1;
+};
+
+struct flm_info_event_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum {
+ FLM_INFO_LOCAL,
+ FLM_INFO_REMOTE,
+ FLM_STAT_LOCAL,
+ FLM_STAT_REMOTE,
+};
+
+/* Max number of local queues */
+#define MAX_INFO_LCL_QUEUES 8
+#define MAX_STAT_LCL_QUEUES 8
+
+/* Max number of remote queues */
+#define MAX_INFO_RMT_QUEUES 128
+#define MAX_STAT_RMT_QUEUES 128
+
+/* queue size */
+#define FLM_EVT_QUEUE_SIZE 8192
+#define FLM_STAT_QUEUE_SIZE 8192
+
+/* Event element size */
+#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
+#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+
+#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index bbf450697c..a1cba7f4c7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4467,6 +4467,48 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
return 0;
}
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ const enum hw_flm_e fields[] = {
+ HW_FLM_STAT_FLOWS, HW_FLM_STAT_LRN_DONE, HW_FLM_STAT_LRN_IGNORE,
+ HW_FLM_STAT_LRN_FAIL, HW_FLM_STAT_UNL_DONE, HW_FLM_STAT_UNL_IGNORE,
+ HW_FLM_STAT_AUL_DONE, HW_FLM_STAT_AUL_IGNORE, HW_FLM_STAT_AUL_FAIL,
+ HW_FLM_STAT_TUL_DONE, HW_FLM_STAT_REL_DONE, HW_FLM_STAT_REL_IGNORE,
+ HW_FLM_STAT_PRB_DONE, HW_FLM_STAT_PRB_IGNORE,
+
+ HW_FLM_STAT_STA_DONE, HW_FLM_STAT_INF_DONE, HW_FLM_STAT_INF_SKIP,
+ HW_FLM_STAT_PCK_HIT, HW_FLM_STAT_PCK_MISS, HW_FLM_STAT_PCK_UNH,
+ HW_FLM_STAT_PCK_DIS, HW_FLM_STAT_CSH_HIT, HW_FLM_STAT_CSH_MISS,
+ HW_FLM_STAT_CSH_UNH, HW_FLM_STAT_CUC_START, HW_FLM_STAT_CUC_MOVE,
+
+ HW_FLM_LOAD_LPS, HW_FLM_LOAD_APS,
+ };
+
+ const uint64_t fields_cnt = sizeof(fields) / sizeof(enum hw_flm_e);
+
+ if (!ndev->flow_mgnt_prepared)
+ return 0;
+
+ if (size < fields_cnt)
+ return -1;
+
+ hw_mod_flm_stat_update(&ndev->be);
+
+ for (uint64_t i = 0; i < fields_cnt; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_stat_get(&ndev->be, fields[i], &value);
+ data[i] = (fields[i] == HW_FLM_STAT_FLOWS || fields[i] == HW_FLM_LOAD_LPS ||
+ fields[i] == HW_FLM_LOAD_APS)
+ ? value
+ : data[i] + value;
+
+ if (ndev->be.flm.ver < 18 && fields[i] == HW_FLM_STAT_PRB_IGNORE)
+ break;
+ }
+
+ return 0;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4483,6 +4525,10 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * Stats
+ */
+ .flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index c695842077..b44d3a7291 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -52,4 +52,10 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+/*
+ * Stats
+ */
+
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/rte_pmd_ntnic.h b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
new file mode 100644
index 0000000000..4a1ba18a5e
--- /dev/null
+++ b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
@@ -0,0 +1,43 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTNIC_EVENT_H_
+#define NTNIC_EVENT_H_
+
+#include <rte_ethdev.h>
+
+typedef struct ntnic_flm_load_s {
+ uint64_t lookup;
+ uint64_t lookup_maximum;
+ uint64_t access;
+ uint64_t access_maximum;
+} ntnic_flm_load_t;
+
+typedef struct ntnic_port_load_s {
+ uint64_t rx_pps;
+ uint64_t rx_pps_maximum;
+ uint64_t tx_pps;
+ uint64_t tx_pps_maximum;
+ uint64_t rx_bps;
+ uint64_t rx_bps_maximum;
+ uint64_t tx_bps;
+ uint64_t tx_bps_maximum;
+} ntnic_port_load_t;
+
+struct ntnic_flm_statistic_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum rte_ntnic_event_type {
+ RTE_NTNIC_FLM_LOAD_EVENT = RTE_ETH_EVENT_MAX,
+ RTE_NTNIC_PORT_LOAD_EVENT,
+ RTE_NTNIC_FLM_STATS_EVENT,
+};
+
+#endif /* NTNIC_EVENT_H_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index f6a74c7df2..9c286a4f35 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,8 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_evt_queue.h"
+#include "rte_pmd_ntnic.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
@@ -1419,6 +1421,7 @@ drv_deinit(struct drv_s *p_drv)
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
+ THREAD_JOIN(p_nt_drv->port_event_thread);
}
/* stop adapter */
@@ -1711,6 +1714,123 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.rss_hash_conf_get = rss_hash_conf_get,
};
+/*
+ * Port event thread
+ */
+THREAD_FUNC port_event_thread_fn(void *context)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)context;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[internals->port_id];
+ uint8_t port_no = internals->port;
+
+ ntnic_flm_load_t flmdata;
+ ntnic_port_load_t portdata;
+
+ memset(&flmdata, 0, sizeof(flmdata));
+ memset(&portdata, 0, sizeof(portdata));
+
+ while (ndev != NULL && ndev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ /*
+ * FLM load measurement
+ * Do only send event, if there has been a change
+ */
+ if (p_nt4ga_stat->flm_stat_ver > 22 && p_nt4ga_stat->mp_stat_structs_flm) {
+ if (flmdata.lookup != p_nt4ga_stat->mp_stat_structs_flm->load_lps ||
+ flmdata.access != p_nt4ga_stat->mp_stat_structs_flm->load_aps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ flmdata.lookup = p_nt4ga_stat->mp_stat_structs_flm->load_lps;
+ flmdata.access = p_nt4ga_stat->mp_stat_structs_flm->load_aps;
+ flmdata.lookup_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps;
+ flmdata.access_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_FLM_LOAD_EVENT,
+ &flmdata);
+ }
+ }
+ }
+
+ /*
+ * Port load measurement
+ * Do only send event, if there has been a change.
+ */
+ if (p_nt4ga_stat->mp_port_load) {
+ if (portdata.rx_bps != p_nt4ga_stat->mp_port_load[port_no].rx_bps ||
+ portdata.tx_bps != p_nt4ga_stat->mp_port_load[port_no].tx_bps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ portdata.rx_bps = p_nt4ga_stat->mp_port_load[port_no].rx_bps;
+ portdata.tx_bps = p_nt4ga_stat->mp_port_load[port_no].tx_bps;
+ portdata.rx_pps = p_nt4ga_stat->mp_port_load[port_no].rx_pps;
+ portdata.tx_pps = p_nt4ga_stat->mp_port_load[port_no].tx_pps;
+ portdata.rx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_pps_max;
+ portdata.tx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_pps_max;
+ portdata.rx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_bps_max;
+ portdata.tx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_bps_max;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_PORT_LOAD_EVENT,
+ &portdata);
+ }
+ }
+ }
+
+ /* Process events */
+ {
+ int count = 0;
+ bool do_wait = true;
+
+ while (count < 5000) {
+ /* Local FLM statistic events */
+ struct flm_info_event_s data;
+
+ if (flm_inf_queue_get(port_no, FLM_INFO_LOCAL, &data) == 0) {
+ if (eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ struct ntnic_flm_statistic_s event_data;
+ event_data.bytes = data.bytes;
+ event_data.packets = data.packets;
+ event_data.cause = data.cause;
+ event_data.id = data.id;
+ event_data.timestamp = data.timestamp;
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)
+ RTE_NTNIC_FLM_STATS_EVENT,
+ &event_data);
+ do_wait = false;
+ }
+ }
+
+ if (do_wait)
+ nt_os_wait_usec(10);
+
+ count++;
+ do_wait = true;
+ }
+ }
+ }
+
+ return THREAD_RETURN;
+}
+
/*
* Adapter flm stat thread
*/
@@ -2237,6 +2357,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* increase initialized ethernet devices - PF */
p_drv->n_eth_dev_init_count++;
+
+ /* Port event thread */
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ res = THREAD_CTRL_CREATE(&p_nt_drv->port_event_thread, "nt_port_event_thr",
+ port_event_thread_fn, (void *)internals);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
}
return 0;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 65e7972c68..7325bd1ea8 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -290,6 +290,13 @@ struct profile_inline_ops {
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+ /*
+ * Stats
+ */
+ int (*flow_get_flm_stats_profile_inline)(struct flow_nic_dev *ndev,
+ uint64_t *data,
+ uint64_t size);
+
/*
* NT Flow FLM queue API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 63/73] net/ntnic: add scrub registers
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (61 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 62/73] net/ntnic: added flow statistics Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 64/73] net/ntnic: update documentation Serhii Iliushyk
` (9 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Scrub fields were added to the FPGA map file.
A duplicated macro was removed.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
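For reference, each `nthw_fpga_field_init_s` entry in the map file follows a `{ field id, bit width, bit offset, reset value }` convention, so a field such as `FLM_SCRUB_DATA_T` (8 bits at offset 0) or `FLM_SCRUB_DATA_DEL` (1 bit at offset 12) is read out of a raw register word with an ordinary shift-and-mask. A standalone sketch of that decoding (the helper name is illustrative, not part of the driver):

```c
#include <stdint.h>

/* Illustrative helper: extract a field of 'width' bits starting at bit
 * 'offset' from a raw register word, mirroring the { name, width, offset,
 * reset } convention of the nthw_fpga_field_init_s entries above. */
static inline uint32_t fpga_field_get(uint32_t reg, unsigned int width,
				      unsigned int offset)
{
	uint32_t mask = (width >= 32) ? UINT32_MAX : ((1u << width) - 1u);

	return (reg >> offset) & mask;
}
```

With the FLM_SCRUB_DATA layout above, `fpga_field_get(reg, 8, 0)` would yield the T field, `fpga_field_get(reg, 4, 8)` the R field, and `fpga_field_get(reg, 1, 12)` the DEL bit.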
---
.../supported/nthw_fpga_9563_055_049_0000.c | 17 ++++++++++++++++-
drivers/net/ntnic/ntnic_ethdev.c | 3 ---
2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 620968ceb6..f1033ca949 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -728,7 +728,7 @@ static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
{ FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
{ FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
{ FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
- { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 }, { FLM_LRN_DATA_SCRUB_PROF, 4, 712, 0x0000 },
{ FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
{ FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
{ FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
@@ -782,6 +782,18 @@ static nthw_fpga_field_init_s flm_scan_fields[] = {
{ FLM_SCAN_I, 16, 0, 0 },
};
+static nthw_fpga_field_init_s flm_scrub_ctrl_fields[] = {
+ { FLM_SCRUB_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_SCRUB_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scrub_data_fields[] = {
+ { FLM_SCRUB_DATA_DEL, 1, 12, 0 },
+ { FLM_SCRUB_DATA_INF, 1, 13, 0 },
+ { FLM_SCRUB_DATA_R, 4, 8, 0 },
+ { FLM_SCRUB_DATA_T, 8, 0, 0 },
+};
+
static nthw_fpga_field_init_s flm_status_fields[] = {
{ FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
{ FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
@@ -921,6 +933,8 @@ static nthw_fpga_register_init_s flm_registers[] = {
{ FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
{ FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
{ FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_SCRUB_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_scrub_ctrl_fields },
+ { FLM_SCRUB_DATA, 11, 14, NTHW_FPGA_REG_TYPE_WO, 0, 4, flm_scrub_data_fields },
{ FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
{ FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
{ FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
@@ -3058,6 +3072,7 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
+ { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 9c286a4f35..263b3ee7d4 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -47,9 +47,6 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
-/* Max RSS queues */
-#define MAX_QUEUES 125
-
#define NUM_VQ_SEGS(_data_size_) \
({ \
size_t _size = (_data_size_); \
--
2.45.0
* [PATCH v3 64/73] net/ntnic: update documentation
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (62 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 63/73] net/ntnic: add scrub registers Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 65/73] net/ntnic: add flow aging API Serhii Iliushyk
` (8 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Update required documentation
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 30 ++++++++++++++++++++++++++
doc/guides/rel_notes/release_24_11.rst | 2 ++
2 files changed, 32 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 2c160ae592..e7e1cbcff7 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -40,6 +40,36 @@ Features
- Unicast MAC filter
- Multicast MAC filter
- Promiscuous mode (Enable only. The device always run promiscuous mode)
+- Multiple TX and RX queues.
+- Scattered and gather for TX and RX.
+- RSS hash
+- RSS key update
+- RSS based on VLAN or 5-tuple.
+- RSS using different combinations of fields: L3 only, L4 only or both, and
+ source only, destination only or both.
+- Several RSS hash keys, one for each flow type.
+- Default RSS operation with no hash key specification.
+- VLAN filtering.
+- RX VLAN stripping via raw decap.
+- TX VLAN insertion via raw encap.
+- Flow API.
+- Multiple process.
+- Tunnel types: GTP.
+- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
+ verification.
+- Support for multiple rte_flow groups.
+- Encapsulation and decapsulation of GTP data.
+- Packet modification: NAT, TTL decrement, DSCP tagging
+- Traffic mirroring.
+- Jumbo frame support.
+- Port and queue statistics.
+- RMON statistics in extended stats.
+- Flow metering, including meter policy API.
+- Link state information.
+- CAM and TCAM based matching.
+- Exact match of 140 million flows and policies.
+- Basic stats
+- Extended stats
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index fa4822d928..75769d1992 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -160,6 +160,8 @@ New Features
* Added NT flow backend initialization.
* Added initialization of FPGA modules related to flow HW offload.
* Added basic handling of the virtual queues.
+ * Added flow handling API
+ * Added statistics API
* **Added cryptodev queue pair reset support.**
--
2.45.0
* [PATCH v3 65/73] net/ntnic: add flow aging API
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (63 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 64/73] net/ntnic: update documentation Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 66/73] net/ntnic: add aging API to the inline profile Serhii Iliushyk
` (7 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the flow aging API to the ops structure.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
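The new entry points follow the driver's usual indirection: the generic flow layer fetches the profile ops structure, bails out when the module was never registered, and only then dispatches through a function pointer. A minimal self-contained sketch of that pattern (all names here are illustrative, not the driver's):

```c
#include <stddef.h>

/* Illustrative ops table with one function pointer, mirroring how
 * flow_get_aged_flows() dispatches into profile_inline_ops. */
struct example_ops {
	int (*get_aged_flows)(void **context, unsigned int nb_contexts);
};

static const struct example_ops *registered_ops;

static int example_get_aged(void **context, unsigned int nb_contexts)
{
	(void)context;
	/* Pretend all requested entries were filled. */
	return (int)nb_contexts;
}

static const struct example_ops ops_impl = {
	.get_aged_flows = example_get_aged,
};

/* Generic entry point: fail cleanly when the module never registered,
 * as the driver does with its "module uninitialized" log. */
int get_aged_flows(void **context, unsigned int nb_contexts)
{
	if (registered_ops == NULL)
		return -1;

	return registered_ops->get_aged_flows(context, nb_contexts);
}

void register_example_ops(void)
{
	registered_ops = &ops_impl;
}
```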
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 71 +++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 88 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 21 +++++
3 files changed, 180 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index efe9a1a3b9..b101a9462e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1048,6 +1048,70 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+static int flow_get_aged_flows(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline_ops uninitialized");
+ return -1;
+ }
+
+ if (nb_contexts > 0 && !context) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "rte_flow_get_aged_flows - empty context";
+ return -1;
+ }
+
+ return profile_inline_ops->flow_get_aged_flows_profile_inline(dev, caller_id, context,
+ nb_contexts, error);
+}
+
+static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_info;
+ (void)queue_info;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_attr;
+ (void)queue_attr;
+ (void)nb_queue;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
@@ -1076,6 +1140,13 @@ static const struct flow_filter_ops ops = {
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
.flow_get_flm_stats = flow_get_flm_stats,
+ .flow_get_aged_flows = flow_get_aged_flows,
+
+ /*
+ * NT Flow asynchronous operations API
+ */
+ .flow_info_get = flow_info_get,
+ .flow_configure = flow_configure,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index e2fce02afa..9f8670b32d 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -718,6 +718,91 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int eth_flow_get_aged_flows(struct rte_eth_dev *eth_dev,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_get_aged_flows(internals->flw_dev, caller_id, context,
+ nb_contexts, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+/*
+ * NT Flow asynchronous operations API
+ */
+
+static int eth_flow_info_get(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_info_get(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (struct rte_flow_port_info *)port_info,
+ (struct rte_flow_queue_info *)queue_info,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr,
+ uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_configure(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (const struct rte_flow_port_attr *)port_attr,
+ nb_queue,
+ (const struct rte_flow_queue_attr **)queue_attr,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static int poll_statistics(struct pmd_internals *internals)
{
int flow;
@@ -844,6 +929,9 @@ static const struct rte_flow_ops dev_flow_ops = {
.destroy = eth_flow_destroy,
.flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
+ .get_aged_flows = eth_flow_get_aged_flows,
+ .info_get = eth_flow_info_get,
+ .configure = eth_flow_configure,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 7325bd1ea8..52f197e873 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -286,6 +286,12 @@ struct profile_inline_ops {
FILE *file,
struct rte_flow_error *error);
+ int (*flow_get_aged_flows_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -355,6 +361,21 @@ struct flow_filter_ops {
int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
+
+ int (*flow_get_aged_flows)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
+ int (*flow_info_get)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v3 66/73] net/ntnic: add aging API to the inline profile
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (64 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 65/73] net/ntnic: add flow aging API Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 67/73] net/ntnic: add flow info and flow configure APIs Serhii Iliushyk
` (6 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Added an implementation of the flow aging API.
The module that operates the age queue was extended with
get, count, and size operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
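The aged-flows call has a dual contract: with a NULL context array it only reports how many aged entries are pending, otherwise it pops entries into the caller's array (the driver additionally rejects the request when the queue holds fewer records than asked for, or when no queue was configured). A simplified self-contained sketch of that query-vs-drain contract (array-backed here; the driver uses an rte_ring):

```c
#include <stddef.h>

/* Simplified stand-in for the age queue: a fixed array plus a count. */
#define AGE_QUEUE_CAP 8

static void *age_entries[AGE_QUEUE_CAP];
static unsigned int age_count;

int age_queue_push(void *ctx)
{
	if (age_count >= AGE_QUEUE_CAP)
		return -1;

	age_entries[age_count++] = ctx;
	return 0;
}

/* Query-vs-drain contract: NULL context reports the pending count,
 * otherwise up to nb_contexts entries are popped into the array. */
int get_aged(void **context, unsigned int nb_contexts)
{
	if (context == NULL)
		return (int)age_count;

	unsigned int n = nb_contexts < age_count ? nb_contexts : age_count;

	for (unsigned int i = 0; i < n; i++)
		context[i] = age_entries[i];

	/* Shift the remainder down (a real implementation uses a ring). */
	for (unsigned int i = n; i < age_count; i++)
		age_entries[i - n] = age_entries[i];

	age_count -= n;
	return (int)n;
}
```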
---
drivers/net/ntnic/meson.build | 1 +
.../flow_api/profile_inline/flm_age_queue.c | 49 ++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 24 +++++++++
.../profile_inline/flow_api_profile_inline.c | 51 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 6 +++
5 files changed, 131 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index c0b7729929..8c6d02a5ec 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -58,6 +58,7 @@ sources = files(
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_age_queue.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
new file mode 100644
index 0000000000..f6f04009fe
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -0,0 +1,49 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <rte_ring.h>
+
+#include "ntlog.h"
+#include "flm_age_queue.h"
+
+/* Queues for flm aged events */
+static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+ /* If queues is not created, then ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue empty");
+
+ return ret;
+ }
+
+ return -ENOENT;
+}
+
+unsigned int flm_age_queue_count(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_count(age_queue[caller_id]);
+
+ return ret;
+}
+
+unsigned int flm_age_queue_get_size(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_get_size(age_queue[caller_id]);
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
new file mode 100644
index 0000000000..d61609cc01
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -0,0 +1,24 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_AGE_QUEUE_H_
+#define _FLM_AGE_QUEUE_H_
+
+#include "stdint.h"
+
+struct flm_age_event_s {
+ void *context;
+};
+
+/* Max number of event queues */
+#define MAX_EVT_AGE_QUEUES 256
+
+#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
+unsigned int flm_age_queue_count(uint16_t caller_id);
+unsigned int flm_age_queue_get_size(uint16_t caller_id);
+
+#endif /* _FLM_AGE_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a1cba7f4c7..9e1ea2a166 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -7,6 +7,7 @@
#include "nt_util.h"
#include "hw_mod_backend.h"
+#include "flm_age_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -4395,6 +4396,55 @@ static void dump_flm_data(const uint32_t *data, FILE *file)
}
}
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ unsigned int queue_size = flm_age_queue_get_size(caller_id);
+
+ if (queue_size == 0) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue size is not configured";
+ return -1;
+ }
+
+ unsigned int queue_count = flm_age_queue_count(caller_id);
+
+ if (context == NULL)
+ return queue_count;
+
+ if (queue_count < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue size contains fewer records than the expected output";
+ return -1;
+ }
+
+ if (queue_size < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Defined aged queue size is smaller than the expected output";
+ return -1;
+ }
+
+ uint32_t idx;
+
+ for (idx = 0; idx < nb_contexts; ++idx) {
+ struct flm_age_event_s obj;
+ int ret = flm_age_queue_get(caller_id, &obj);
+
+ if (ret != 0)
+ break;
+
+ context[idx] = obj.context;
+ }
+
+ return idx;
+}
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -4525,6 +4575,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ .flow_get_aged_flows_profile_inline = flow_get_aged_flows_profile_inline,
/*
* Stats
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b44d3a7291..e1934bc6a6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -48,6 +48,12 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
FILE *file,
struct rte_flow_error *error);
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
--
2.45.0
* [PATCH v3 67/73] net/ntnic: add flow info and flow configure APIs
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (65 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 66/73] net/ntnic: add aging API to the inline profile Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 68/73] net/ntnic: add flow aging event Serhii Iliushyk
` (5 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The inline profile was extended with the flow info and flow configure APIs.
The module that operates the age queue was extended with
create and free operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
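flm_age_queue_create() only accepts element counts that are a power of two and within the ring size mask, which is the standard rte_ring constraint. The validation is equivalent to this standalone helper (the mask constant here is illustrative, standing in for RTE_RING_SZ_MASK):

```c
#include <stdbool.h>
#include <stdint.h>

/* Power-of-two test equivalent to rte_is_power_of_2(): a power of two
 * has exactly one bit set, so n & (n - 1) clears it to zero. */
static inline bool is_power_of_2(uint32_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Stand-in for the ring size mask (illustrative value). */
#define EXAMPLE_RING_SZ_MASK 0x7FFFFFFFu

/* Mirror of the create-path guard: the requested element count must be
 * a power of two and must not exceed the ring's size mask. */
static inline bool ring_count_valid(uint32_t count)
{
	return is_power_of_2(count) && count <= EXAMPLE_RING_SZ_MASK;
}
```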
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 19 +----
.../flow_api/profile_inline/flm_age_queue.c | 77 +++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 5 ++
.../profile_inline/flow_api_profile_inline.c | 62 ++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 9 +++
drivers/net/ntnic/ntnic_mod_reg.h | 9 +++
8 files changed, 169 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index ed96f77bc0..89f071d982 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -77,6 +77,9 @@ struct flow_eth_dev {
/* QSL_HSH index if RSS needed QSL v6+ */
int rss_target_id;
+ /* The size of buffer for aged out flow list */
+ uint32_t nb_aging_objects;
+
struct flow_eth_dev *next;
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 155a9e1fd6..604a896717 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -320,6 +320,7 @@ struct flow_handle {
uint32_t flm_teid;
uint8_t flm_rqi;
uint8_t flm_qfi;
+ uint8_t flm_scrub_prof;
};
};
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index b101a9462e..5349dc84ab 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1075,12 +1075,6 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_info;
- (void)queue_info;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1088,20 +1082,14 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_info_get_profile_inline(dev, caller_id, port_info,
+ queue_info, error);
}
static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_attr;
- (void)queue_attr;
- (void)nb_queue;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1109,7 +1097,8 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_configure_profile_inline(dev, caller_id, port_attr,
+ nb_queue, queue_attr, error);
}
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index f6f04009fe..fbc947ee1d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -4,12 +4,89 @@
*/
#include <rte_ring.h>
+#include <rte_errno.h>
#include "ntlog.h"
#include "flm_age_queue.h"
/* Queues for flm aged events */
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+
+void flm_age_queue_free(uint8_t port, uint16_t caller_id)
+{
+ struct rte_ring *q = NULL;
+
+ if (port < MAX_EVT_AGE_PORTS)
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ q = age_queue[caller_id];
+ age_queue[caller_id] = NULL;
+ }
+
+ if (q != NULL)
+ rte_ring_free(q);
+}
+
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
+{
+ char name[20];
+ struct rte_ring *q = NULL;
+
+ if (rte_is_power_of_2(count) == false || count > RTE_RING_SZ_MASK) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue number of elements (%u) is invalid, must be power of 2, and not exceed %u",
+ count,
+ RTE_RING_SZ_MASK);
+ return NULL;
+ }
+
+ if (port >= MAX_EVT_AGE_PORTS) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_EVT_AGE_PORTS - 1);
+ return NULL;
+ }
+
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+
+ if (caller_id >= MAX_EVT_AGE_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for caller_id %u. Max supported caller_id is %u",
+ caller_id,
+ MAX_EVT_AGE_QUEUES - 1);
+ return NULL;
+ }
+
+ if (age_queue[caller_id] != NULL) {
+ NT_LOG(DBG, FILTER, "FLM aged event queue %u already created", caller_id);
+ return age_queue[caller_id];
+ }
+
+ snprintf(name, 20, "AGE_EVENT%u", caller_id);
+ q = rte_ring_create_elem(name,
+ FLM_AGE_ELEM_SIZE,
+ count,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created due to error %02X",
+ rte_errno);
+ return NULL;
+ }
+
+ age_queue[caller_id] = q;
+
+ return q;
+}
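The `rte_is_power_of_2()`/`RTE_RING_SZ_MASK` guard above reflects a requirement of `rte_ring`: the element count must be a power of two (unless `RING_F_EXACT_SZ` is used). A standalone sketch of the same validation, using the classic bit trick instead of the DPDK helper (the mask value is copied here as an assumption):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the rte_is_power_of_2() check: a nonzero value is a
 * power of two iff it has exactly one bit set. */
static bool is_power_of_2(uint32_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Stand-in for RTE_RING_SZ_MASK; treat the exact value as an
 * assumption of this sketch. */
#define RING_SZ_MASK 0x7FFFFFFFu

/* Same acceptance test the age-queue creation applies to 'count'. */
static bool age_queue_count_valid(uint32_t count)
{
	return is_power_of_2(count) && count <= RING_SZ_MASK;
}
```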
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index d61609cc01..9ff6ef6de0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -15,8 +15,13 @@ struct flm_age_event_s {
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
+/* Max number of event ports */
+#define MAX_EVT_AGE_PORTS 128
+
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9e1ea2a166..300d6712aa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -490,7 +490,7 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->ft = fh->flm_ft;
learn_record->kid = fh->flm_kid;
learn_record->eor = 1;
- learn_record->scrub_prof = 0;
+ learn_record->scrub_prof = fh->flm_scrub_prof;
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
return 0;
@@ -2439,6 +2439,7 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_rpl_ext_ptr = rpl_ext_ptr;
fh->flm_prio = (uint8_t)priority;
fh->flm_ft = (uint8_t)flm_ft;
+ fh->flm_scrub_prof = (uint8_t)flm_scrub;
for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
switch (fd->modify_field[i].select) {
@@ -4559,6 +4560,63 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
return 0;
}
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ (void)queue_info;
+ (void)caller_id;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+ memset(port_info, 0, sizeof(struct rte_flow_port_info));
+
+ port_info->max_nb_aging_objects = dev->nb_aging_objects;
+
+ return res;
+}
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ (void)nb_queue;
+ (void)queue_attr;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (port_attr->nb_aging_objects > 0) {
+ if (dev->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ struct rte_ring *age_queue =
+ flm_age_queue_create(dev->port_id, caller_id, port_attr->nb_aging_objects);
+
+ if (age_queue == NULL) {
+ error->message = "Failed to allocate aging objects";
+ goto error_out;
+ }
+
+ dev->nb_aging_objects = port_attr->nb_aging_objects;
+ }
+
+ return res;
+
+error_out:
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+
+ if (port_attr->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ return -1;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4580,6 +4638,8 @@ static const struct profile_inline_ops ops = {
* Stats
*/
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
+ .flow_info_get_profile_inline = flow_info_get_profile_inline,
+ .flow_configure_profile_inline = flow_configure_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e1934bc6a6..ea1d9c31b2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -64,4 +64,13 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error);
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 52f197e873..15da911ca7 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -309,6 +309,15 @@ struct profile_inline_ops {
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
uint32_t (*flm_update)(struct flow_eth_dev *dev);
+
+ int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v3 68/73] net/ntnic: add flow aging event
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (66 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 67/73] net/ntnic: add flow info and flow configure APIs Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 69/73] net/ntnic: add termination thread Serhii Iliushyk
` (4 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The port thread was extended with a new age event callback handler.
Getters and setters were added for the LRN, INF, and STA registers.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 13 ++
drivers/net/ntnic/include/hw_mod_backend.h | 11 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 16 ++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 158 +++++++++++++++
.../flow_api/profile_inline/flm_age_queue.c | 28 +++
.../flow_api/profile_inline/flm_age_queue.h | 12 ++
.../flow_api/profile_inline/flm_evt_queue.c | 20 ++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_hw_db_inline.c | 142 +++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 84 ++++----
.../profile_inline/flow_api_profile_inline.c | 183 ++++++++++++++++++
.../flow_api_profile_inline_config.h | 21 +-
drivers/net/ntnic/ntnic_ethdev.c | 16 ++
14 files changed, 671 insertions(+), 37 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 604a896717..c75e7cff83 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -148,6 +148,14 @@ struct hsh_def_s {
const uint8_t *key; /* Hash key. */
};
+/*
+ * AGE configuration, see struct rte_flow_action_age
+ */
+struct age_def_s {
+ uint32_t timeout;
+ void *context;
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -264,6 +272,11 @@ struct nic_flow_def {
* Hash module RSS definitions
*/
struct hsh_def_s hsh;
+
+ /*
+ * AGE action timeout
+ */
+ struct age_def_s age;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 9cd9d92823..7a36e4c6d6 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be);
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
@@ -695,8 +698,16 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
uint32_t *sta_word_cnt);
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt);
+uint32_t hw_mod_flm_scrub_timeout_decode(uint32_t t_enc);
+uint32_t hw_mod_flm_scrub_timeout_encode(uint32_t t);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_scrub_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
struct hsh_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 5635ac4524..a3f5e1d7f7 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -129,3 +129,19 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
pthread_mutex_unlock(&handle->mtx);
}
+
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, id);
+
+ *caller_id = element->caller_id;
+ *type = element->type;
+ memcpy(flm_h, &element->handle, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index e190fe4a11..edb4f42729 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -20,4 +20,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
uint8_t type);
void ntnic_id_table_free_id(void *id_table, uint32_t id);
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 1845f74166..14dd95a150 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,52 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_buf_ctrl_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_buf_ctrl_mod_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value)
+{
+ int get = 1; /* Only get supported */
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_BUF_CTRL_LRN_FREE:
+ GET_SET(be->flm.v25.buf_ctrl->lrn_free, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_INF_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->inf_avail, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_STA_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->sta_avail, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_buf_ctrl_mod_get(be, field, value);
+}
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
{
return be->iface->flm_stat_update(be->be_dev, &be->flm);
@@ -887,3 +933,115 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
return ret;
}
+
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_INF_STA_DATA:
+ be->iface->flm_inf_sta_data_update(be->be_dev, &be->flm, inf_value,
+ inf_size, inf_word_cnt, sta_value,
+ sta_size, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+/*
+ * SCRUB timeout support functions to encode users' input into FPGA 8-bit time format:
+ * Timeout in seconds (2^30 nanoseconds); zero means disabled. Value is:
+ *
+ * (T[7:3] != 0) ? ((8 + T[2:0]) shift-left (T[7:3] - 1)) : T[2:0]
+ *
+ * The maximum allowed value is 0xEF (127 years).
+ *
+ * Note that this represents a lower bound on the timeout: depending on the
+ * flow scanner interval and overall load, the actual timeout can be
+ * substantially longer.
+ */
+uint32_t hw_mod_flm_scrub_timeout_decode(uint32_t t_enc)
+{
+ uint8_t t_bits_2_0 = t_enc & 0x07;
+ uint8_t t_bits_7_3 = (t_enc >> 3) & 0x1F;
+ return t_bits_7_3 != 0 ? ((8 + t_bits_2_0) << (t_bits_7_3 - 1)) : t_bits_2_0;
+}
+
+uint32_t hw_mod_flm_scrub_timeout_encode(uint32_t t)
+{
+ uint32_t t_enc = 0;
+
+ if (t > 0) {
+ uint32_t t_dec = 0;
+
+ do {
+ t_enc++;
+ t_dec = hw_mod_flm_scrub_timeout_decode(t_enc);
+ } while (t_enc <= 0xEF && t_dec < t);
+ }
+
+ return t_enc;
+}
+
+static int hw_mod_flm_scrub_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCRUB_PRESET_ALL:
+ if (get)
+ return UNSUP_FIELD;
+
+ memset(&be->flm.v25.scrub[index], (uint8_t)*value,
+ sizeof(struct flm_v25_scrub_s));
+ break;
+
+ case HW_FLM_SCRUB_T:
+ GET_SET(be->flm.v25.scrub[index].t, value);
+ break;
+
+ case HW_FLM_SCRUB_R:
+ GET_SET(be->flm.v25.scrub[index].r, value);
+ break;
+
+ case HW_FLM_SCRUB_DEL:
+ GET_SET(be->flm.v25.scrub[index].del, value);
+ break;
+
+ case HW_FLM_SCRUB_INF:
+ GET_SET(be->flm.v25.scrub[index].inf, value);
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scrub_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_scrub_mod(be, field, index, &value, 0);
+}
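For reference, the 8-bit timeout format described above (3-bit mantissa in T[2:0], 5-bit exponent in T[7:3], units of 2^30 ns, i.e. roughly one second) can be exercised outside the driver. This sketch reuses the decode/encode logic from the patch:

```c
#include <stdint.h>

/* Decode the FPGA 8-bit SCRUB timeout:
 * value = (T[7:3] != 0) ? ((8 + T[2:0]) << (T[7:3] - 1)) : T[2:0] */
static uint32_t scrub_timeout_decode(uint32_t t_enc)
{
	uint32_t m = t_enc & 0x07;		/* mantissa */
	uint32_t e = (t_enc >> 3) & 0x1F;	/* exponent */
	return e != 0 ? ((8 + m) << (e - 1)) : m;
}

/* Encode by searching for the smallest code whose decoded value
 * reaches the requested timeout, so the decoded timeout is always
 * >= the requested one (capped at the maximum code 0xEF). */
static uint32_t scrub_timeout_encode(uint32_t t)
{
	uint32_t t_enc = 0;

	if (t > 0) {
		uint32_t t_dec = 0;

		do {
			t_enc++;
			t_dec = scrub_timeout_decode(t_enc);
		} while (t_enc <= 0xEF && t_dec < t);
	}

	return t_enc;
}
```

For example, a requested 100-second timeout encodes to 0x25 (exponent 4, mantissa 5), which decodes back to 104 seconds, the nearest representable value above the request.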
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index fbc947ee1d..76bbd57f65 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -13,6 +13,21 @@
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+__rte_always_inline int flm_age_event_get(uint8_t port)
+{
+ return rte_atomic_load_explicit(&age_event[port], rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_set(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 1, rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_clear(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+}
+
void flm_age_queue_free(uint8_t port, uint16_t caller_id)
{
struct rte_ring *q = NULL;
@@ -88,6 +103,19 @@ struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned
return q;
}
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+	/* If the queue has not been created, ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue full");
+ }
+}
+
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 9ff6ef6de0..27154836c5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -12,6 +12,14 @@ struct flm_age_event_s {
void *context;
};
+/* Indicates why the flow info record was generated */
+#define INF_DATA_CAUSE_SW_UNLEARN 0
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED 1
+#define INF_DATA_CAUSE_NA 2
+#define INF_DATA_CAUSE_PERIODIC_FLOW_INFO 3
+#define INF_DATA_CAUSE_SW_PROBE 4
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT 5
+
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
@@ -20,8 +28,12 @@ struct flm_age_event_s {
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+int flm_age_event_get(uint8_t port);
+void flm_age_event_set(uint8_t port);
+void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 98b0e8347a..db9687714f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -138,6 +138,26 @@ static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
return q;
}
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
+{
+ struct rte_ring **stat_q = remote ? stat_q_remote : stat_q_local;
+
+ if (port >= (remote ? MAX_STAT_RMT_QUEUES : MAX_STAT_LCL_QUEUES))
+ return -1;
+
+ if (stat_q[port] == NULL) {
+ if (flm_evt_queue_create(port, remote ? FLM_STAT_REMOTE : FLM_STAT_LOCAL) == NULL)
+ return -1;
+ }
+
+ if (rte_ring_sp_enqueue_elem(stat_q[port], obj, FLM_STAT_ELEM_SIZE) != 0) {
+		NT_LOG(DBG, FILTER, "FLM %s status queue full", remote ? "remote" : "local");
+ return -1;
+ }
+
+ return 0;
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 238be7a3b2..3a61f844b6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,5 +48,6 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index b5fee67e67..2fee6ae6b5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "rte_common.h"
#define HW_DB_INLINE_ACTION_SET_NB 512
@@ -57,12 +58,18 @@ struct hw_db_inline_resource_db {
int ref;
} *hsh;
+ struct hw_db_inline_resource_db_scrub {
+ struct hw_db_inline_scrub_data data;
+ int ref;
+ } *scrub;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
uint32_t nb_tpe;
uint32_t nb_tpe_ext;
uint32_t nb_hsh;
+ uint32_t nb_scrub;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -255,6 +262,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_scrub = ndev->be.flm.nb_scrub_profiles;
+ db->scrub = calloc(db->nb_scrub, sizeof(struct hw_db_inline_resource_db_scrub));
+
+ if (db->scrub == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
/* Preset data */
@@ -276,6 +291,7 @@ void hw_db_inline_destroy(void *db_handle)
free(db->tpe);
free(db->tpe_ext);
free(db->hsh);
+ free(db->scrub);
free(db->cat);
@@ -366,6 +382,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_SCRUB:
+ hw_db_inline_scrub_deref(ndev, db_handle,
+ *(struct hw_db_flm_scrub_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -410,9 +431,9 @@ void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct
else
fprintf(file,
- " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d, SCRUB id %d\n",
data->cot.ids, data->qsl.ids, data->slc_lr.ids,
- data->tpe.ids, data->hsh.ids);
+ data->tpe.ids, data->hsh.ids, data->scrub.ids);
break;
}
@@ -577,6 +598,15 @@ void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct
break;
}
+ case HW_DB_IDX_TYPE_FLM_SCRUB: {
+ const struct hw_db_inline_scrub_data *data = &db->scrub[idxs[i].ids].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " SCRUB %d\n", idxs[i].ids);
+ fprintf(file, " Timeout: %d, encoded timeout: %d\n",
+ hw_mod_flm_scrub_timeout_decode(data->timeout), data->timeout);
+ break;
+ }
+
case HW_DB_IDX_TYPE_HSH: {
const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
fprintf(file, " HSH %d\n", idxs[i].ids);
@@ -690,6 +720,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_HSH:
return &db->hsh[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_SCRUB:
+ return &db->scrub[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -1540,7 +1573,7 @@ static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_
return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
- data1->hsh.raw == data2->hsh.raw;
+ data1->hsh.raw == data2->hsh.raw && data1->scrub.raw == data2->scrub.raw;
}
struct hw_db_action_set_idx
@@ -2849,3 +2882,106 @@ void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->hsh[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM SCRUB */
+/******************************************************************************/
+
+static int hw_db_inline_scrub_compare(const struct hw_db_inline_scrub_data *data1,
+ const struct hw_db_inline_scrub_data *data2)
+{
+ return data1->timeout == data2->timeout;
+}
+
+struct hw_db_flm_scrub_idx hw_db_inline_scrub_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_scrub_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_scrub_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_SCRUB;
+
+ /* NOTE: scrub id 0 is reserved for "default" timeout 0, i.e. flow will never AGE-out */
+ if (data->timeout == 0) {
+ idx.ids = 0;
+ hw_db_inline_scrub_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_scrub; ++i) {
+ int ref = db->scrub[i].ref;
+
+ if (ref > 0 && hw_db_inline_scrub_compare(data, &db->scrub[i].data)) {
+ idx.ids = i;
+ hw_db_inline_scrub_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ int res = hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_T, idx.ids, data->timeout);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_R, idx.ids,
+ NTNIC_SCANNER_TIMEOUT_RESOLUTION);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_DEL, idx.ids, SCRUB_DEL);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_INF, idx.ids, SCRUB_INF);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->scrub[idx.ids].ref = 1;
+ memcpy(&db->scrub[idx.ids].data, data, sizeof(struct hw_db_inline_scrub_data));
+ flow_nic_mark_resource_used(ndev, RES_SCRUB_RCP, idx.ids);
+
+ hw_mod_flm_scrub_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_scrub_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx)
+{
+ (void)ndev;
+
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->scrub[idx.ids].ref += 1;
+}
+
+void hw_db_inline_scrub_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->scrub[idx.ids].ref -= 1;
+
+ if (db->scrub[idx.ids].ref <= 0) {
+ /* NOTE: scrub id 0 is reserved for "default" timeout 0, which shall not be removed
+ */
+ if (idx.ids > 0) {
+ hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_T, idx.ids, 0);
+ hw_mod_flm_scrub_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->scrub[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_scrub_data));
+ flow_nic_free_resource(ndev, RES_SCRUB_RCP, idx.ids);
+ }
+
+ db->scrub[idx.ids].ref = 0;
+ }
+}
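The add/ref/deref trio above follows a common deduplicating, reference-counted slot-table pattern with index 0 permanently reserved for the default (timeout 0, never age out). A minimal standalone sketch of that allocation pattern, with all names hypothetical and the hardware writes/flushes omitted:

```c
#include <stdint.h>

#define NB_SLOTS 16	/* stand-in for ndev->be.flm.nb_scrub_profiles */

struct slot {
	uint32_t timeout;
	int ref;
};

static struct slot table[NB_SLOTS];

/* Return a slot index for 'timeout', sharing an existing matching
 * entry when possible. Slot 0 is reserved for timeout 0. Returns -1
 * when the table is full. */
static int slot_add(uint32_t timeout)
{
	int free_idx = -1;

	if (timeout == 0) {
		table[0].ref++;
		return 0;
	}

	for (int i = 1; i < NB_SLOTS; i++) {
		if (table[i].ref > 0 && table[i].timeout == timeout) {
			table[i].ref++;	/* deduplicated: reuse */
			return i;
		}

		if (free_idx < 0 && table[i].ref <= 0)
			free_idx = i;	/* remember first free slot */
	}

	if (free_idx < 0)
		return -1;

	table[free_idx] = (struct slot){ .timeout = timeout, .ref = 1 };
	return free_idx;
}

static void slot_deref(int i)
{
	/* Slot 0 is never released; others are cleared on last deref. */
	if (--table[i].ref <= 0 && i > 0)
		table[i].timeout = 0;
}
```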
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a9d31c86ea..c920d36cfd 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -117,6 +117,10 @@ struct hw_db_flm_ft {
HW_DB_IDX;
};
+struct hw_db_flm_scrub_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -145,6 +149,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
+ HW_DB_IDX_TYPE_FLM_SCRUB,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -160,6 +165,43 @@ struct hw_db_inline_match_set_data {
uint8_t priority;
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
+ struct hw_db_tpe_idx tpe;
+ struct hw_db_hsh_idx hsh;
+ struct hw_db_flm_scrub_idx scrub;
+ };
+ };
+};
+
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+
+ struct hw_db_action_set_idx action_set;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -232,39 +274,8 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
-struct hw_db_inline_action_set_data {
- int contains_jump;
- union {
- int jump;
- struct {
- struct hw_db_cot_idx cot;
- struct hw_db_qsl_idx qsl;
- struct hw_db_slc_lr_idx slc_lr;
- struct hw_db_tpe_idx tpe;
- struct hw_db_hsh_idx hsh;
- };
- };
-};
-
-struct hw_db_inline_km_rcp_data {
- uint32_t rcp;
-};
-
-struct hw_db_inline_km_ft_data {
- struct hw_db_cat_idx cat;
- struct hw_db_km_idx km;
- struct hw_db_action_set_idx action_set;
-};
-
-struct hw_db_inline_flm_ft_data {
- /* Group zero flows should set jump. */
- /* Group nonzero flows should set group. */
- int is_group_zero;
- union {
- int jump;
- int group;
- };
- struct hw_db_action_set_idx action_set;
+struct hw_db_inline_scrub_data {
+ uint32_t timeout;
};
/**/
@@ -368,6 +379,13 @@ void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct
void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_flm_ft idx);
+struct hw_db_flm_scrub_idx hw_db_inline_scrub_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_scrub_data *data);
+void hw_db_inline_scrub_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx);
+void hw_db_inline_scrub_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 300d6712aa..af8ed9abdc 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,7 @@
#include "hw_mod_backend.h"
#include "flm_age_queue.h"
+#include "flm_evt_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -20,6 +21,13 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define DMA_BLOCK_SIZE 256
+#define DMA_OVERHEAD 20
+#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
+#define MAX_STA_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_STA_DATA)
+#define WORDS_PER_INF_DATA (sizeof(struct flm_v25_inf_data_s) / sizeof(uint32_t))
+#define MAX_INF_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_INF_DATA)
+
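These defines size each DMA read so that only whole records are fetched from a block after subtracting the overhead words. The arithmetic can be checked with illustrative record sizes; the real values come from `sizeof(struct flm_v25_inf_data_s)` and `sizeof(struct flm_v25_sta_data_s)`, which are not shown in this patch:

```c
#include <stdint.h>

#define DMA_BLOCK_SIZE 256	/* 32-bit words per DMA read */
#define DMA_OVERHEAD 20		/* words not available for records */

/* Illustrative only: the driver derives these from the v25 struct
 * sizes, which may differ. */
#define WORDS_PER_INF_DATA 8
#define WORDS_PER_STA_DATA 2

/* Whole records per read: integer division discards any partial
 * record at the end of the usable area. */
#define MAX_INF_RECORDS ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_INF_DATA)
#define MAX_STA_RECORDS ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_STA_DATA)
```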
#define NT_FLM_MISS_FLOW_TYPE 0
#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
@@ -71,14 +79,127 @@ static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
return r.num;
}
+static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
+{
+ if (caller_id < MAX_VDPA_PORTS + 1) {
+ *port = caller_id;
+ return true;
+ }
+
+ *port = caller_id - MAX_VDPA_PORTS - 1;
+ return false;
+}
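The helper above folds the flat caller_id space into a (remote, port) pair: ids 0..MAX_VDPA_PORTS address remote (vDPA) callers, everything above maps back to a local physical port. The mapping can be sketched in isolation; MAX_VDPA_PORTS is set to 4 here purely for illustration, the driver defines its own value:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_VDPA_PORTS 4	/* illustrative assumption */

/* Same split as the patch: low caller_ids are remote ports verbatim,
 * the rest are rebased to local port numbers. */
static bool is_remote_caller(uint8_t caller_id, uint8_t *port)
{
	if (caller_id < MAX_VDPA_PORTS + 1) {
		*port = caller_id;
		return true;
	}

	*port = caller_id - MAX_VDPA_PORTS - 1;
	return false;
}
```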
+
+static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_inf_data_s *inf_data =
+ (struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, inf_data->id, &flm_h, &caller_id,
+ &type);
+
+ /* Check that the received record holds valid meter statistics */
+ if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
+
+ age_event.context = fh->context;
+
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
+ break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
+ }
+ }
+ }
+}
+
+static void flm_mtr_read_sta_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_sta_data_s *sta_data =
+ (struct flm_v25_sta_data_s *)&data[i * WORDS_PER_STA_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, sta_data->id, &flm_h, &caller_id,
+ &type);
+
+ if (type == 1) {
+ uint8_t port;
+ bool remote_caller = is_remote_caller(caller_id, &port);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+ ((struct flow_handle *)flm_h.p)->learn_ignored = 1;
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ struct flm_status_event_s status_event = {
+ .flow = flm_h.p,
+ .learn_ignore = sta_data->lis,
+ .learn_failed = sta_data->lfs,
+ };
+
+ flm_sta_queue_put(port, remote_caller, &status_event);
+ }
+ }
+}
+
static uint32_t flm_update(struct flow_eth_dev *dev)
{
static uint32_t inf_word_cnt;
static uint32_t sta_word_cnt;
+ uint32_t inf_data[DMA_BLOCK_SIZE];
+ uint32_t sta_data[DMA_BLOCK_SIZE];
+
+ if (inf_word_cnt >= WORDS_PER_INF_DATA || sta_word_cnt >= WORDS_PER_STA_DATA) {
+ uint32_t inf_records = inf_word_cnt / WORDS_PER_INF_DATA;
+
+ if (inf_records > MAX_INF_DATA_RECORDS_PER_READ)
+ inf_records = MAX_INF_DATA_RECORDS_PER_READ;
+
+ uint32_t sta_records = sta_word_cnt / WORDS_PER_STA_DATA;
+
+ if (sta_records > MAX_STA_DATA_RECORDS_PER_READ)
+ sta_records = MAX_STA_DATA_RECORDS_PER_READ;
+
+ hw_mod_flm_inf_sta_data_update_get(&dev->ndev->be, HW_FLM_FLOW_INF_STA_DATA,
+ inf_data, inf_records * WORDS_PER_INF_DATA,
+ &inf_word_cnt, sta_data,
+ sta_records * WORDS_PER_STA_DATA,
+ &sta_word_cnt);
+
+ if (inf_records > 0)
+ flm_mtr_read_inf_records(dev, inf_data, inf_records);
+
+ if (sta_records > 0)
+ flm_mtr_read_sta_records(dev, sta_data, sta_records);
+
+ return 1;
+ }
+
if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
return 1;
+ hw_mod_flm_buf_ctrl_update(&dev->ndev->be);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_INF_AVAIL, &inf_word_cnt);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_STA_AVAIL, &sta_word_cnt);
+
return inf_word_cnt + sta_word_cnt;
}
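The batching logic in flm_update() above clamps the number of whole records read per DMA block. A minimal sketch of that arithmetic in Python; the 8-word record size is an illustrative assumption, not the actual size of the hardware structs:

```python
# Sketch of the record batching in flm_update(): a DMA read returns at most
# DMA_BLOCK_SIZE 32-bit words minus a fixed overhead, so the number of whole
# records per read is capped accordingly.
DMA_BLOCK_SIZE = 256  # words per DMA block (from the patch)
DMA_OVERHEAD = 20     # words of overhead per block (from the patch)

def max_records_per_read(words_per_record: int) -> int:
    # Whole records that fit into the usable part of one DMA block.
    return (DMA_BLOCK_SIZE - DMA_OVERHEAD) // words_per_record

def clamp_records(avail_words: int, words_per_record: int) -> int:
    # Available records, clamped to the per-read maximum.
    records = avail_words // words_per_record
    return min(records, max_records_per_read(words_per_record))

# With a hypothetical 8-word record, at most 29 records fit per read.
print(max_records_per_read(8), clamp_records(1000, 8))  # prints 29 29
```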
@@ -1067,6 +1188,25 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_AGE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_AGE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_age age_tmp;
+ const struct rte_flow_action_age *age =
+ memcpy_mask_if(&age_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_age));
+ fd->age.timeout = hw_mod_flm_scrub_timeout_encode(age->timeout);
+ fd->age.context = age->context;
+ NT_LOG(DBG, FILTER,
+ "normalized timeout: %u, original timeout: %u, context: %p",
+ hw_mod_flm_scrub_timeout_decode(fd->age.timeout),
+ age->timeout, fd->age.context);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
@@ -2467,6 +2607,7 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
break;
}
}
+ fh->context = fd->age.context;
}
static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
@@ -2723,6 +2864,21 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup SCRUB profile */
+ struct hw_db_inline_scrub_data scrub_data = { .timeout = fd->age.timeout };
+ struct hw_db_flm_scrub_idx scrub_idx =
+ hw_db_inline_scrub_add(dev->ndev, dev->ndev->hw_db_handle, &scrub_data);
+ local_idxs[(*local_idx_counter)++] = scrub_idx.raw;
+
+ if (scrub_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM SCRUB resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_scrub)
+ *flm_scrub = scrub_idx.ids;
+
/* Setup Action Set */
struct hw_db_inline_action_set_data action_set_data = {
.contains_jump = 0,
@@ -2731,6 +2887,7 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
.slc_lr = slc_lr_idx,
.tpe = tpe_idx,
.hsh = hsh_idx,
+ .scrub = scrub_idx,
};
struct hw_db_action_set_idx action_set_idx =
hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
@@ -2797,6 +2954,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ fh->context = fd->age.context;
nic_insert_flow(dev->ndev, fh);
} else if (attr->group > 0) {
@@ -2853,6 +3011,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
*/
int identical_km_entry_ft = -1;
+ /* Setup Action Set */
+
+ /* SCRUB/AGE action is not supported for group 0 */
+ if (fd->age.timeout != 0 || fd->age.context != NULL) {
+ NT_LOG(ERR, FILTER, "Action AGE is not supported for flow in group 0");
+ flow_nic_set_error(ERR_ACTION_AGE_UNSUPPORTED_GROUP_0, error);
+ goto error_out;
+ }
+
+ /* NOTE: SCRUB record 0 is used by default with timeout 0, i.e. the flow will
+ * never age out
+ */
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -3349,6 +3519,15 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+ /* Initialize SCRUB with default index 0, i.e. the flow will never age out */
+ if (hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_PRESET_ALL, 0, 0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_flm_scrub_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_SCRUB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -3484,6 +3663,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+ hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_PRESET_ALL, 0, 0);
+ hw_mod_flm_scrub_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SCRUB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
index 8ba8b8f67a..3b53288ddf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -55,4 +55,23 @@
*/
#define NTNIC_SCANNER_LOAD 0.01
-#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
+/*
+ * This define sets the timeout resolution of the aged flow scanner (scrubber).
+ *
+ * The timeout resolution feature is provided in order to reduce the number of
+ * write-back operations for flows without an attached meter. If the resolution
+ * is disabled (set to 0) and flow timeout is enabled via the age action, then a
+ * write-back occurs every time the flow is evicted from the flow cache, essentially
+ * causing the lookup performance to drop to that of a flow with a meter. By setting
+ * the timeout resolution (>0), write-back for a flow happens only when the difference
+ * between the last recorded time for the flow and the current time exceeds the
+ * chosen resolution.
+ *
+ * The parameter value is a power-of-2 exponent in units of 2^28 nanoseconds, i.e. a
+ * value of 8 sets the timeout resolution to 2^8 * 2^28 / 1e9 = 68.7 seconds.
+ *
+ * NOTE: This parameter has a significant impact on flow lookup performance, especially
+ * if full scanner timeout resolution (= 0) is configured.
+ */
+#define NTNIC_SCANNER_TIMEOUT_RESOLUTION 8
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
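As a quick check of the formula in the comment above, the resolution in seconds for a given exponent can be computed directly (a minimal sketch mirroring the constants in the comment, not driver code):

```python
# Timeout resolution of the flow scanner: the parameter is a power-of-2
# exponent in units of 2^28 nanoseconds.
def scanner_timeout_resolution_seconds(exponent: int) -> float:
    """Resolution in seconds for NTNIC_SCANNER_TIMEOUT_RESOLUTION = exponent."""
    return (2 ** exponent) * (2 ** 28) / 1e9

# The default value of 8 gives roughly 68.7 seconds.
print(round(scanner_timeout_resolution_seconds(8), 1))  # prints 68.7
```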
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 263b3ee7d4..6cac8da17e 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,7 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_age_queue.h"
#include "profile_inline/flm_evt_queue.h"
#include "rte_pmd_ntnic.h"
@@ -1816,6 +1817,21 @@ THREAD_FUNC port_event_thread_fn(void *context)
}
}
+ /* AGED event */
+ /* Note: RTE_FLOW_PORT_FLAG_STRICT_QUEUE flag is not supported so
+ * event is always generated
+ */
+ int aged_event_count = flm_age_event_get(port_no);
+
+ if (aged_event_count > 0 && eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_FLOW_AGED,
+ NULL);
+ flm_age_event_clear(port_no);
+ do_wait = false;
+ }
+
if (do_wait)
nt_os_wait_usec(10);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 69/73] net/ntnic: add termination thread
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (67 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 68/73] net/ntnic: add flow aging event Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 70/73] net/ntnic: add aging documentation Serhii Iliushyk
` (3 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Introduce clear_pdrv to unregister the driver
from global tracking.
Modify drv_deinit to call clear_pdrv and ensure
safe termination.
Add freeing of the FLM status and age event queues.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../flow_api/profile_inline/flm_age_queue.c | 10 +++
.../flow_api/profile_inline/flm_age_queue.h | 1 +
.../flow_api/profile_inline/flm_evt_queue.c | 76 +++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 6 ++
5 files changed, 94 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index 76bbd57f65..d916eccec7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -44,6 +44,16 @@ void flm_age_queue_free(uint8_t port, uint16_t caller_id)
rte_ring_free(q);
}
+void flm_age_queue_free_all(void)
+{
+ int i;
+ int j;
+
+ for (i = 0; i < MAX_EVT_AGE_PORTS; i++)
+ for (j = 0; j < MAX_EVT_AGE_QUEUES; j++)
+ flm_age_queue_free(i, j);
+}
+
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
{
char name[20];
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 27154836c5..55c410ac86 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -32,6 +32,7 @@ int flm_age_event_get(uint8_t port);
void flm_age_event_set(uint8_t port);
void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+void flm_age_queue_free_all(void);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index db9687714f..761609a0ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -25,6 +25,82 @@ static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
/* Remote queues for flm status records */
static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+static void flm_inf_sta_queue_free(uint8_t port, uint8_t caller)
+{
+ struct rte_ring *q = NULL;
+
+ /* If the queue is not created, then ignore and return */
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ q = info_q_local[port];
+ info_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ q = info_q_remote[port];
+ info_q_remote[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port < MAX_STAT_LCL_QUEUES && stat_q_local[port] != NULL) {
+ q = stat_q_local[port];
+ stat_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port < MAX_STAT_RMT_QUEUES && stat_q_remote[port] != NULL) {
+ q = stat_q_remote[port];
+ stat_q_remote[port] = NULL;
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ break;
+ }
+
+ if (q)
+ rte_ring_free(q);
+}
+
+void flm_inf_sta_queue_free_all(uint8_t caller)
+{
+ int count = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ count = MAX_INFO_LCL_QUEUES;
+ break;
+
+ case FLM_INFO_REMOTE:
+ count = MAX_INFO_RMT_QUEUES;
+ break;
+
+ case FLM_STAT_LOCAL:
+ count = MAX_STAT_LCL_QUEUES;
+ break;
+
+ case FLM_STAT_REMOTE:
+ count = MAX_STAT_RMT_QUEUES;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ return;
+ }
+
+ for (int i = 0; i < count; i++)
+ flm_inf_sta_queue_free(i, caller);
+}
static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 3a61f844b6..d61b282472 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -47,6 +47,7 @@ enum {
#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+void flm_inf_sta_queue_free_all(uint8_t caller);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 6cac8da17e..eca67dbd62 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1420,6 +1420,12 @@ drv_deinit(struct drv_s *p_drv)
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
THREAD_JOIN(p_nt_drv->port_event_thread);
+ /* Free all local flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_LOCAL);
+ /* Free all remote flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_REMOTE);
+ /* Free all aged flow event queues */
+ flm_age_queue_free_all();
}
/* stop adapter */
--
2.45.0
* [PATCH v3 70/73] net/ntnic: add aging documentation
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (68 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 69/73] net/ntnic: add termination thread Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 71/73] net/ntnic: add meter API Serhii Iliushyk
` (2 subsequent siblings)
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The ntnic.rst document was extended with the age feature specification.
ntnic.ini was extended with rte_flow age action support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 18 ++++++++++++++++++
doc/guides/rel_notes/release_24_11.rst | 1 +
3 files changed, 20 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 947c7ba3a1..af2981ccf6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -33,6 +33,7 @@ udp = Y
vlan = Y
[rte_flow actions]
+age = Y
drop = Y
jump = Y
mark = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e7e1cbcff7..e5a8d71892 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -148,3 +148,21 @@ FILTER
To enable logging on all levels use wildcard in the following way::
--log-level=pmd.net.ntnic.*,8
+
+Flow Scanner
+------------
+
+Flow Scanner is a DPDK mechanism that periodically scans the RTE flow tables to check for aged-out flows.
+When a flow's timeout is reached, i.e. no packets were matched by the flow within the timeout period,
+an ``RTE_ETH_EVENT_FLOW_AGED`` event is reported and the flow is marked as aged-out.
+
+Therefore, the flow scanner functionality is closely connected to the RTE flow ``age`` action.
+
+The ``age`` action has the following characteristics:
+ - it functions only in groups > 0;
+ - the flow timeout is specified in seconds;
+ - the flow scanner checks flow timeouts once every 1-480 seconds, so flows may not age out immediately, depending on the length of the flow scanner's check interval;
+ - the aging counters can display a maximum of **n - 1** aged flows when the aging counters are set to **n**;
+ - overall, 15 different timeouts can be in use at the same time. This limit is combined across all ``age`` actions, and the maximum of 15 distinct timeouts can only be reached across different groups (e.g. when 5 flows with different timeouts are created in each of several groups); within a single group the limit is 14 distinct timeouts;
+ - after a flow is aged-out, it is not deleted automatically;
+ - an aged-out flow can be updated with the ``flow update`` command, which reverts its aged-out status.
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 75769d1992..b449b01dc8 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -162,6 +162,7 @@ New Features
* Added basic handling of the virtual queues.
* Added flow handling API
* Added statistics API
+ * Added rte_flow age action support
* **Added cryptodev queue pair reset support.**
--
2.45.0
* [PATCH v3 71/73] net/ntnic: add meter API
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (69 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 70/73] net/ntnic: add aging documentation Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 72/73] net/ntnic: add meter module Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 73/73] net/ntnic: update meter documentation Serhii Iliushyk
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the meter API and its implementation to the profile inline.
Management functions were extended with meter flow support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 +
.../flow_api/profile_inline/flm_evt_queue.c | 21 +
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 560 +++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 27 +
6 files changed, 597 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 89f071d982..032063712a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -100,6 +100,7 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *flm_mtr_handle;
void *group_handle;
void *hw_db_handle;
void *id_table_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index c75e7cff83..b40a27fbf1 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -57,6 +57,7 @@ enum res_type_e {
#define MAX_TCAM_START_OFFSETS 4
+#define MAX_FLM_MTRS_SUPPORTED 4
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
@@ -223,6 +224,8 @@ struct nic_flow_def {
uint32_t jump_to_group;
+ uint32_t mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
int full_offload;
/*
@@ -320,6 +323,8 @@ struct flow_handle {
uint32_t flm_db_idx_counter;
uint32_t flm_db_idxs[RES_COUNT];
+ uint32_t flm_mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
uint32_t flm_data[10];
uint8_t flm_prot;
uint8_t flm_kid;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 761609a0ea..d76c7da568 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -234,6 +234,27 @@ int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
return 0;
}
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+ /* If the queue is not created, then ignore and return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_local[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM local info queue full");
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_remote[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM remote info queue full");
+ }
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index d61b282472..ee8175cf25 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,6 +48,7 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
void flm_inf_sta_queue_free_all(uint8_t caller);
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index af8ed9abdc..8b48b26a5e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define FLM_MTR_PROFILE_SIZE 0x100000
+#define FLM_MTR_STAT_SIZE 0x1000000
+#define UINT64_MSB ((uint64_t)1 << 63)
+
#define DMA_BLOCK_SIZE 256
#define DMA_OVERHEAD 20
#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
@@ -46,8 +50,336 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
+#define POLICING_PARAMETER_OFFSET 4096
+#define SIZE_CONVERTER 1099.511627776
+
+struct flm_mtr_stat_s {
+ struct dual_buckets_s *buckets;
+ atomic_uint_fast64_t n_pkt;
+ atomic_uint_fast64_t n_bytes;
+ uint64_t n_pkt_base;
+ uint64_t n_bytes_base;
+ atomic_uint_fast64_t stats_mask;
+ uint32_t flm_id;
+};
+
+struct flm_mtr_shared_stats_s {
+ struct flm_mtr_stat_s *stats;
+ uint32_t size;
+ int shared;
+};
+
+struct flm_flow_mtr_handle_s {
+ struct dual_buckets_s {
+ uint16_t rate_a;
+ uint16_t rate_b;
+ uint16_t size_a;
+ uint16_t size_b;
+ } dual_buckets[FLM_MTR_PROFILE_SIZE];
+
+ struct flm_mtr_shared_stats_s *port_stats[UINT8_MAX];
+};
+
static void *flm_lrn_queue_arr;
+static int flow_mtr_supported(struct flow_eth_dev *dev)
+{
+ return hw_mod_flm_present(&dev->ndev->be) && dev->ndev->be.flm.nb_variant == 2;
+}
+
+static uint64_t flow_mtr_meter_policy_n_max(void)
+{
+ return FLM_MTR_PROFILE_SIZE;
+}
+
+static inline uint64_t convert_policing_parameter(uint64_t value)
+{
+ uint64_t limit = POLICING_PARAMETER_OFFSET;
+ uint64_t shift = 0;
+ uint64_t res = value;
+
+ while (shift < 15 && value >= limit) {
+ limit <<= 1;
+ ++shift;
+ }
+
+ if (shift != 0) {
+ uint64_t tmp = POLICING_PARAMETER_OFFSET * (1 << (shift - 1));
+
+ if (tmp > value) {
+ res = 0;
+
+ } else {
+ tmp = value - tmp;
+ res = tmp >> (shift - 1);
+ }
+
+ if (res >= POLICING_PARAMETER_OFFSET)
+ res = POLICING_PARAMETER_OFFSET - 1;
+
+ res = res | (shift << 12);
+ }
+
+ return res;
+}
+
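The encoding performed by convert_policing_parameter() above packs a value into a 12-bit mantissa plus a 4-bit shift. A direct Python transcription of the C routine, useful for spot-checking the encoding (a sketch for illustration only):

```python
# Python port of convert_policing_parameter() from the patch above:
# values below the offset pass through unchanged; larger values are encoded
# as a 12-bit mantissa with the shift stored in the upper 4 bits.
POLICING_PARAMETER_OFFSET = 4096  # 2^12, as in the patch

def convert_policing_parameter(value: int) -> int:
    limit = POLICING_PARAMETER_OFFSET
    shift = 0
    res = value
    while shift < 15 and value >= limit:
        limit <<= 1
        shift += 1
    if shift != 0:
        tmp = POLICING_PARAMETER_OFFSET * (1 << (shift - 1))
        if tmp > value:
            res = 0
        else:
            res = (value - tmp) >> (shift - 1)
        # Clamp the mantissa to 12 bits before adding the shift field.
        if res >= POLICING_PARAMETER_OFFSET:
            res = POLICING_PARAMETER_OFFSET - 1
        res |= shift << 12
    return res

print(convert_policing_parameter(1000), convert_policing_parameter(10000))  # prints 1000 9096
```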
+static int flow_mtr_set_profile(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a, uint64_t bucket_rate_b,
+ uint64_t bucket_size_b)
+{
+ struct flow_nic_dev *ndev = dev->ndev;
+ struct flm_flow_mtr_handle_s *handle =
+ (struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle;
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ /* Round rates up to nearest 128 bytes/sec and shift to 128 bytes/sec units */
+ bucket_rate_a = (bucket_rate_a + 127) >> 7;
+ bucket_rate_b = (bucket_rate_b + 127) >> 7;
+
+ buckets->rate_a = convert_policing_parameter(bucket_rate_a);
+ buckets->rate_b = convert_policing_parameter(bucket_rate_b);
+
+ /* Round size down to 38-bit int */
+ if (bucket_size_a > 0x3fffffffff)
+ bucket_size_a = 0x3fffffffff;
+
+ if (bucket_size_b > 0x3fffffffff)
+ bucket_size_b = 0x3fffffffff;
+
+ /* Convert size to units of 2^40 / 10^9. Output is a 28-bit int. */
+ bucket_size_a = bucket_size_a / SIZE_CONVERTER;
+ bucket_size_b = bucket_size_b / SIZE_CONVERTER;
+
+ buckets->size_a = convert_policing_parameter(bucket_size_a);
+ buckets->size_b = convert_policing_parameter(bucket_size_b);
+
+ return 0;
+}
+
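The unit conversions in flow_mtr_set_profile() above can be checked numerically: rates are rounded up to 128 bytes/sec units, and SIZE_CONVERTER equals 2^40 / 10^9, which maps a (clamped) 38-bit byte count into a value that fits in 28 bits. A minimal sketch mirroring those constants:

```python
# Unit conversions from flow_mtr_set_profile(): the constants mirror the
# patch; the helper names are illustrative, not driver APIs.
SIZE_CONVERTER = 2 ** 40 / 1e9  # == 1099.511627776, as defined in the patch

def rate_to_128B_units(bytes_per_sec: int) -> int:
    # Round up to the nearest 128 bytes/sec, then express in 128 B/s units.
    return (bytes_per_sec + 127) >> 7

def size_to_hw_units(size_bytes: int) -> int:
    # Clamp to a 38-bit value, then convert; the result fits in 28 bits.
    size_bytes = min(size_bytes, 0x3fffffffff)
    return int(size_bytes / SIZE_CONVERTER)

print(rate_to_128B_units(1_000_000))  # prints 7813
```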
+static int flow_mtr_set_policy(struct flow_eth_dev *dev, uint32_t policy_id, int drop)
+{
+ (void)dev;
+ (void)policy_id;
+ (void)drop;
+ return 0;
+}
+
+static uint32_t flow_mtr_meters_supported(struct flow_eth_dev *dev, uint8_t caller_id)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ return handle->port_stats[caller_id]->size;
+}
+
+static int flow_mtr_create_meter(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t mtr_id,
+ uint32_t profile_id,
+ uint32_t policy_id,
+ uint64_t stats_mask)
+{
+ (void)policy_id;
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ union flm_handles flm_h;
+ flm_h.idx = mtr_id;
+ uint32_t flm_id = ntnic_id_table_get_id(dev->ndev->id_table_handle, flm_h, caller_id, 2);
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = buckets->rate_a;
+ learn_record->size = buckets->size_a;
+ learn_record->fill = buckets->size_a;
+
+ learn_record->ft_mbr =
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE; /* FT to assign if MBR has been exceeded */
+
+ learn_record->ent = 1;
+ learn_record->op = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ if (stats_mask)
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ mtr_stat[mtr_id].buckets = buckets;
+ mtr_stat[mtr_id].flm_id = flm_id;
+ atomic_store(&mtr_stat[mtr_id].stats_mask, stats_mask);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 3;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 0;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ /* Clear statistics so stats_mask prevents updates of counters on deleted meters */
+ atomic_store(&mtr_stat[mtr_id].stats_mask, 0);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, 0);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, 0);
+ mtr_stat[mtr_id].n_bytes_base = 0;
+ mtr_stat[mtr_id].n_pkt_base = 0;
+ mtr_stat[mtr_id].buckets = NULL;
+
+ ntnic_id_table_free_id(dev->ndev->id_table_handle, flm_id);
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = &handle->port_stats[caller_id]->stats[mtr_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = mtr_stat->flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = mtr_stat->buckets->rate_a;
+ learn_record->size = mtr_stat->buckets->size_a;
+ learn_record->adj = adjust_value;
+
+ learn_record->ft_mbr = NT_FLM_VIOLATING_MBR_FLOW_TYPE;
+
+ learn_record->ent = 1;
+ learn_record->op = 2;
+ learn_record->eor = 1;
+
+ if (atomic_load(&mtr_stat->stats_mask))
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
static void flm_setup_queues(void)
{
flm_lrn_queue_arr = flm_lrn_queue_create();
@@ -92,6 +424,8 @@ static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
for (uint32_t i = 0; i < records; ++i) {
struct flm_v25_inf_data_s *inf_data =
(struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
@@ -102,29 +436,62 @@ static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, u
&type);
/* Check that received record holds valid meter statistics */
- if (type == 1) {
- switch (inf_data->cause) {
- case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
- case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
- struct flow_handle *fh = (struct flow_handle *)flm_h.p;
- struct flm_age_event_s age_event;
- uint8_t port;
+ if (type == 2) {
+ uint64_t mtr_id = flm_h.idx;
+
+ if (mtr_id < handle->port_stats[caller_id]->size) {
+ struct flm_mtr_stat_s *mtr_stat =
+ handle->port_stats[caller_id]->stats;
+
+ /* Don't update a deleted meter */
+ uint64_t stats_mask = atomic_load(&mtr_stat[mtr_id].stats_mask);
+
+ if (stats_mask) {
+ atomic_store(&mtr_stat[mtr_id].n_pkt,
+ inf_data->packets | UINT64_MSB);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, inf_data->bytes);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, inf_data->packets);
+ struct flm_info_event_s stat_data;
+ bool remote_caller;
+ uint8_t port;
+
+ remote_caller = is_remote_caller(caller_id, &port);
+
+ /* Save stat data to flm stat queue */
+ stat_data.bytes = inf_data->bytes;
+ stat_data.packets = inf_data->packets;
+ stat_data.id = mtr_id;
+ stat_data.timestamp = inf_data->ts;
+ stat_data.cause = inf_data->cause;
+ flm_inf_queue_put(port, remote_caller, &stat_data);
+ }
+ }
- age_event.context = fh->context;
+ /* Check that received record holds valid flow data */
- is_remote_caller(caller_id, &port);
+ } else if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
- flm_age_queue_put(caller_id, &age_event);
- flm_age_event_set(port);
- }
- break;
+ age_event.context = fh->context;
- case INF_DATA_CAUSE_SW_UNLEARN:
- case INF_DATA_CAUSE_NA:
- case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
- case INF_DATA_CAUSE_SW_PROBE:
- default:
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
}
}
}
@@ -203,6 +570,42 @@ static uint32_t flm_update(struct flow_eth_dev *dev)
return inf_word_cnt + sta_word_cnt;
}
+static void flm_mtr_read_stats(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ *stats_mask = atomic_load(&mtr_stat[id].stats_mask);
+
+ if (*stats_mask) {
+ uint64_t pkt_1;
+ uint64_t pkt_2;
+ uint64_t nb;
+
+ do {
+ do {
+ pkt_1 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 & UINT64_MSB);
+
+ nb = atomic_load(&mtr_stat[id].n_bytes);
+ pkt_2 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 != pkt_2);
+
+ *green_pkt = pkt_1 - mtr_stat[id].n_pkt_base;
+ *green_bytes = nb - mtr_stat[id].n_bytes_base;
+
+ if (clear) {
+ mtr_stat[id].n_pkt_base = pkt_1;
+ mtr_stat[id].n_bytes_base = nb;
+ }
+ }
+}
+
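The read loop in `flm_mtr_read_stats` follows a small seqlock-like protocol: the updater first stores the packet counter with its most significant bit set (`UINT64_MSB`) as a write-in-progress flag, then stores the byte counter, then republishes the packet counter with the flag cleared. A reader that observes the flag, or sees the packet count change across the byte load, retries. The same idea as a stand-alone sketch (names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define MSB64 (1ULL << 63)

struct stat_pair {
	_Atomic uint64_t n_pkt;
	_Atomic uint64_t n_bytes;
};

/* Writer: mark n_pkt with the MSB while n_bytes is being updated,
 * then publish the final packet count with the MSB cleared. */
static void stat_write(struct stat_pair *s, uint64_t pkts, uint64_t bytes)
{
	atomic_store(&s->n_pkt, pkts | MSB64);	/* write in progress */
	atomic_store(&s->n_bytes, bytes);
	atomic_store(&s->n_pkt, pkts);		/* publish */
}

/* Reader: retry until a consistent (pkt, bytes) snapshot is observed. */
static void stat_read(struct stat_pair *s, uint64_t *pkts, uint64_t *bytes)
{
	uint64_t p1, p2, b;

	do {
		do {
			p1 = atomic_load(&s->n_pkt);
		} while (p1 & MSB64);		/* writer is mid-update */

		b = atomic_load(&s->n_bytes);
		p2 = atomic_load(&s->n_pkt);
	} while (p1 != p2);			/* counts changed under us */

	*pkts = p1;
	*bytes = b;
}
```

With a single reader and writer this gives a torn-read-free snapshot without any lock on the fast path, which is why the driver can update these counters from the FLM info-record path while `stats_read` runs concurrently.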
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -492,6 +895,8 @@ static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
fd->mark = UINT32_MAX;
fd->jump_to_group = UINT32_MAX;
+ memset(fd->mtr_ids, 0xff, sizeof(uint32_t) * MAX_FLM_MTRS_SUPPORTED);
+
fd->l2_prot = -1;
fd->l3_prot = -1;
fd->l4_prot = -1;
@@ -587,9 +992,17 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->sw9 = fh->flm_data[0];
learn_record->prot = fh->flm_prot;
+ learn_record->mbr_idx1 = fh->flm_mtr_ids[0];
+ learn_record->mbr_idx2 = fh->flm_mtr_ids[1];
+ learn_record->mbr_idx3 = fh->flm_mtr_ids[2];
+ learn_record->mbr_idx4 = fh->flm_mtr_ids[3];
+
/* Last non-zero mtr is used for statistics */
uint8_t mbrs = 0;
+ while (mbrs < MAX_FLM_MTRS_SUPPORTED && fh->flm_mtr_ids[mbrs] != 0)
+ ++mbrs;
+
learn_record->vol_idx = mbrs;
learn_record->nat_ip = fh->flm_nat_ipv4;
@@ -628,6 +1041,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
uint32_t *num_dest_port,
uint32_t *num_queues)
{
+ int mtr_count = 0;
+
unsigned int encap_decap_order = 0;
uint64_t modify_field_use_flags = 0x0;
@@ -813,6 +1228,29 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_METER:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_METER", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_meter meter_tmp;
+ const struct rte_flow_action_meter *meter =
+ memcpy_mask_if(&meter_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_meter));
+
+ if (mtr_count >= MAX_FLM_MTRS_SUPPORTED) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Number of METER actions exceeds %d.",
+ MAX_FLM_MTRS_SUPPORTED);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->mtr_ids[mtr_count++] = meter->mtr_id;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
@@ -2530,6 +2968,13 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
+ for (int i = 0; i < MAX_FLM_MTRS_SUPPORTED; ++i) {
+ struct flm_flow_mtr_handle_s *handle = fh->dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[fh->caller_id]->stats;
+ fh->flm_mtr_ids[i] =
+ fd->mtr_ids[i] == UINT32_MAX ? 0 : mtr_stat[fd->mtr_ids[i]].flm_id;
+ }
+
switch (fd->l4_prot) {
case PROT_L4_TCP:
fh->flm_prot = 6;
@@ -3599,6 +4044,29 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (ndev->id_table_handle == NULL)
goto err_exit0;
+ ndev->flm_mtr_handle = calloc(1, sizeof(struct flm_flow_mtr_handle_s));
+ struct flm_mtr_shared_stats_s *flm_shared_stats =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *flm_stats =
+ calloc(FLM_MTR_STAT_SIZE, sizeof(struct flm_mtr_stat_s));
+
+ if (ndev->flm_mtr_handle == NULL || flm_shared_stats == NULL ||
+ flm_stats == NULL) {
+ free(ndev->flm_mtr_handle);
+ free(flm_shared_stats);
+ free(flm_stats);
+ goto err_exit0;
+ }
+
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ ((struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle)->port_stats[i] =
+ flm_shared_stats;
+ }
+
+ flm_shared_stats->stats = flm_stats;
+ flm_shared_stats->size = FLM_MTR_STAT_SIZE;
+ flm_shared_stats->shared = UINT8_MAX;
+
if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
goto err_exit0;
@@ -3633,6 +4101,18 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ struct flm_flow_mtr_handle_s *handle = ndev->flm_mtr_handle;
+ handle->port_stats[i]->shared -= 1;
+
+ if (handle->port_stats[i]->shared == 0) {
+ free(handle->port_stats[i]->stats);
+ free(handle->port_stats[i]);
+ }
+ }
+
+ free(ndev->flm_mtr_handle);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
@@ -4756,6 +5236,11 @@ int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
port_info->max_nb_aging_objects = dev->nb_aging_objects;
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle)
+ port_info->max_nb_meters = mtr_handle->port_stats[caller_id]->size;
+
return res;
}
@@ -4787,6 +5272,35 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
dev->nb_aging_objects = port_attr->nb_aging_objects;
}
+ if (port_attr->nb_meters > 0) {
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle->port_stats[caller_id]->shared == 1) {
+ struct flm_mtr_stat_s *stats =
+ realloc(mtr_handle->port_stats[caller_id]->stats,
+ port_attr->nb_meters * sizeof(struct flm_mtr_stat_s));
+
+ if (stats == NULL) {
+ res = -1;
+ } else {
+ mtr_handle->port_stats[caller_id]->stats = stats;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+ }
+
+ } else {
+ mtr_handle->port_stats[caller_id] =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *stats =
+ calloc(port_attr->nb_meters, sizeof(struct flm_mtr_stat_s));
+
+ if (mtr_handle->port_stats[caller_id] == NULL || stats == NULL) {
+ free(mtr_handle->port_stats[caller_id]);
+ free(stats);
+ error->message = "Failed to allocate meter statistics";
+ goto error_out;
+ }
+
+ mtr_handle->port_stats[caller_id]->stats = stats;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+ mtr_handle->port_stats[caller_id]->shared = 1;
+ }
+ }
+
return res;
error_out:
@@ -4826,8 +5340,18 @@ static const struct profile_inline_ops ops = {
/*
* NT Flow FLM Meter API
*/
+ .flow_mtr_supported = flow_mtr_supported,
+ .flow_mtr_meter_policy_n_max = flow_mtr_meter_policy_n_max,
+ .flow_mtr_set_profile = flow_mtr_set_profile,
+ .flow_mtr_set_policy = flow_mtr_set_policy,
+ .flow_mtr_create_meter = flow_mtr_create_meter,
+ .flow_mtr_probe_meter = flow_mtr_probe_meter,
+ .flow_mtr_destroy_meter = flow_mtr_destroy_meter,
+ .flm_mtr_adjust_stats = flm_mtr_adjust_stats,
+ .flow_mtr_meters_supported = flow_mtr_meters_supported,
.flm_setup_queues = flm_setup_queues,
.flm_free_queues = flm_free_queues,
+ .flm_mtr_read_stats = flm_mtr_read_stats,
.flm_update = flm_update,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 15da911ca7..1e9dcd549f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -308,6 +308,33 @@ struct profile_inline_ops {
*/
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
+
+ /*
+ * NT Flow FLM Meter API
+ */
+ int (*flow_mtr_supported)(struct flow_eth_dev *dev);
+ uint64_t (*flow_mtr_meter_policy_n_max)(void);
+ int (*flow_mtr_set_profile)(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a,
+ uint64_t bucket_rate_b, uint64_t bucket_size_b);
+ int (*flow_mtr_set_policy)(struct flow_eth_dev *dev, uint32_t policy_id, int drop);
+ int (*flow_mtr_create_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t profile_id, uint32_t policy_id, uint64_t stats_mask);
+ int (*flow_mtr_probe_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id);
+ int (*flow_mtr_destroy_meter)(struct flow_eth_dev *dev, uint8_t caller_id,
+ uint32_t mtr_id);
+ int (*flm_mtr_adjust_stats)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value);
+ uint32_t (*flow_mtr_meters_supported)(struct flow_eth_dev *dev, uint8_t caller_id);
+
+ void (*flm_mtr_read_stats)(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear);
+
uint32_t (*flm_update)(struct flow_eth_dev *dev);
int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
--
2.45.0
* [PATCH v3 72/73] net/ntnic: add meter module
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (70 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 71/73] net/ntnic: add meter API Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
2024-10-23 17:00 ` [PATCH v3 73/73] net/ntnic: update meter documentation Serhii Iliushyk
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The meter module was added with support for:
1. add/remove meter profiles
2. create/destroy meters
3. add/remove meter policies
4. read/update meter stats
The eth_dev_ops struct was extended with the ops above.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/ntos_drv.h | 14 +
drivers/net/ntnic/meson.build | 2 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 11 +-
drivers/net/ntnic/ntnic_mod_reg.c | 21 +
drivers/net/ntnic/ntnic_mod_reg.h | 12 +
6 files changed, 542 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
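From the application side, the four operations listed above map onto the standard rte_mtr API. A minimal setup sketch, assuming the constraints this PMD enforces (RFC 2698 with equal committed/peak rates, a green-pass/yellow-drop/red-drop policy, and a shared meter); the rate/burst values and IDs are illustrative only:

```c
#include <rte_mtr.h>

static int setup_meter(uint16_t port_id)
{
	struct rte_mtr_error error;

	/* RFC 2698 profile; this PMD requires cir == pir and cbs == pbs. */
	struct rte_mtr_meter_profile profile = {
		.alg = RTE_MTR_TRTCM_RFC2698,
		.trtcm_rfc2698 = {
			.cir = 1000000, .pir = 1000000,	/* bytes/s, must be equal */
			.cbs = 4096,    .pbs = 4096,	/* bytes, must be equal */
		},
	};

	if (rte_mtr_meter_profile_add(port_id, 0, &profile, &error) != 0)
		return -1;

	/* Green passes (empty action list), yellow and red drop. */
	struct rte_flow_action pass[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_action drop[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_mtr_meter_policy_params policy = {
		.actions = { pass, drop, drop },	/* GREEN, YELLOW, RED */
	};

	if (rte_mtr_meter_policy_add(port_id, 0, &policy, &error) != 0)
		return -1;

	/* Color-blind, enabled, green-only stats; must be shared for this PMD. */
	struct rte_mtr_params params = {
		.meter_profile_id = 0,
		.meter_policy_id = 0,
		.meter_enable = 1,
		.stats_mask = RTE_MTR_STATS_N_PKTS_GREEN |
			RTE_MTR_STATS_N_BYTES_GREEN,
	};

	return rte_mtr_create(port_id, 0, &params, 1 /* shared */, &error);
}
```

The meter can then be attached to a flow with `RTE_FLOW_ACTION_TYPE_METER` (up to four per flow, per the FLM learn-record layout in the previous patch).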
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 7b3c8ff3d6..f6ce442d17 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -12,6 +12,7 @@
#include <inttypes.h>
#include <rte_ether.h>
+#include "rte_mtr.h"
#include "stream_binary_flow_api.h"
#include "nthw_drv.h"
@@ -90,6 +91,19 @@ struct __rte_cache_aligned ntnic_tx_queue {
enum fpga_info_profile profile; /* Inline / Capture */
};
+struct nt_mtr_profile {
+ LIST_ENTRY(nt_mtr_profile) next;
+ uint32_t profile_id;
+ struct rte_mtr_meter_profile profile;
+};
+
+struct nt_mtr {
+ LIST_ENTRY(nt_mtr) next;
+ uint32_t mtr_id;
+ int shared;
+ struct nt_mtr_profile *profile;
+};
+
struct pmd_internals {
const struct rte_pci_device *pci_dev;
struct flow_eth_dev *flw_dev;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 8c6d02a5ec..ca46541ef3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -17,6 +17,7 @@ includes = [
include_directories('nthw'),
include_directories('nthw/supported'),
include_directories('nthw/model'),
+ include_directories('nthw/ntnic_meter'),
include_directories('nthw/flow_filter'),
include_directories('nthw/flow_api'),
include_directories('nim/'),
@@ -92,6 +93,7 @@ sources = files(
'nthw/flow_filter/flow_nthw_tx_cpy.c',
'nthw/flow_filter/flow_nthw_tx_ins.c',
'nthw/flow_filter/flow_nthw_tx_rpl.c',
+ 'nthw/ntnic_meter/ntnic_meter.c',
'nthw/model/nthw_fpga_model.c',
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
diff --git a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
new file mode 100644
index 0000000000..e4e8fe0c7d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
@@ -0,0 +1,483 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_meter.h>
+#include <rte_mtr.h>
+#include <rte_mtr_driver.h>
+#include <rte_malloc.h>
+
+#include "ntos_drv.h"
+#include "ntlog.h"
+#include "nt_util.h"
+#include "ntos_system.h"
+#include "ntnic_mod_reg.h"
+
+static inline uint8_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + (uint8_t)(port & 0x7f) + 1;
+}
+
+struct qos_integer_fractional {
+ uint32_t integer;
+ uint32_t fractional; /* 1/1024 */
+};
+
+/*
+ * Inline FLM metering
+ */
+
+static int eth_mtr_capabilities_get_inline(struct rte_eth_dev *eth_dev,
+ struct rte_mtr_capabilities *cap,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (!profile_inline_ops->flow_mtr_supported(internals->flw_dev)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Ethernet device does not support metering");
+ }
+
+ memset(cap, 0x0, sizeof(struct rte_mtr_capabilities));
+
+ /* MBR records use 28-bit integers */
+ cap->n_max = profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id);
+ cap->n_shared_max = cap->n_max;
+
+ cap->identical = 0;
+ cap->shared_identical = 0;
+
+ cap->shared_n_flows_per_mtr_max = UINT32_MAX;
+
+ /* Limited by number of MBR record ids per FLM learn record */
+ cap->chaining_n_mtrs_per_flow_max = 4;
+
+ cap->chaining_use_prev_mtr_color_supported = 0;
+ cap->chaining_use_prev_mtr_color_enforced = 0;
+
+ cap->meter_rate_max = (uint64_t)(0xfff << 0xf) * 1099;
+
+ cap->stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ /* Only color-blind mode is supported */
+ cap->color_aware_srtcm_rfc2697_supported = 0;
+ cap->color_aware_trtcm_rfc2698_supported = 0;
+ cap->color_aware_trtcm_rfc4115_supported = 0;
+
+ /* Focused on RFC2698 for now */
+ cap->meter_srtcm_rfc2697_n_max = 0;
+ cap->meter_trtcm_rfc2698_n_max = cap->n_max;
+ cap->meter_trtcm_rfc4115_n_max = 0;
+
+ cap->meter_policy_n_max = profile_inline_ops->flow_mtr_meter_policy_n_max();
+
+ /* Byte mode is supported */
+ cap->srtcm_rfc2697_byte_mode_supported = 0;
+ cap->trtcm_rfc2698_byte_mode_supported = 1;
+ cap->trtcm_rfc4115_byte_mode_supported = 0;
+
+ /* Packet mode not supported */
+ cap->srtcm_rfc2697_packet_mode_supported = 0;
+ cap->trtcm_rfc2698_packet_mode_supported = 0;
+ cap->trtcm_rfc4115_packet_mode_supported = 0;
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (profile->packet_mode != 0) {
+ return -rte_mtr_error_set(error, EINVAL,
+ RTE_MTR_ERROR_TYPE_METER_PROFILE_PACKET_MODE, NULL,
+ "Profile packet mode not supported");
+ }
+
+ if (profile->alg == RTE_MTR_SRTCM_RFC2697) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 2697 not supported");
+ }
+
+ if (profile->alg == RTE_MTR_TRTCM_RFC4115) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 4115 not supported");
+ }
+
+ if (profile->trtcm_rfc2698.cir != profile->trtcm_rfc2698.pir ||
+ profile->trtcm_rfc2698.cbs != profile->trtcm_rfc2698.pbs) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile committed and peak rates must be equal");
+ }
+
+ int res = profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id,
+ profile->trtcm_rfc2698.cir,
+ profile->trtcm_rfc2698.cbs, 0, 0);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile could not be added.");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_delete_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id, 0, 0, 0, 0);
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t policy_id,
+ struct rte_mtr_meter_policy_params *policy,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ const struct rte_flow_action *actions = policy->actions[RTE_COLOR_GREEN];
+ int green_action_supported = (actions[0].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_VOID &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_PASSTHRU &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END);
+
+ actions = policy->actions[RTE_COLOR_YELLOW];
+ int yellow_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ actions = policy->actions[RTE_COLOR_RED];
+ int red_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ if (green_action_supported == 0 || yellow_action_supported == 0 ||
+ red_action_supported == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Unsupported meter policy actions");
+ }
+
+ if (profile_inline_ops->flow_mtr_set_policy(internals->flw_dev, policy_id, 1)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Policy could not be added");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_delete_inline(struct rte_eth_dev *eth_dev __rte_unused,
+ uint32_t policy_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ return 0;
+}
+
+static int eth_mtr_create_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (params->use_prev_mtr_color != 0 || params->dscp_table != NULL) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only color blind mode is supported");
+ }
+
+ uint64_t allowed_stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ if ((params->stats_mask & ~allowed_stats_mask) != 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Requested color stats not supported");
+ }
+
+ if (params->meter_enable == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Disabled meters not supported");
+ }
+
+ if (shared == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only shared mtrs are supported");
+ }
+
+ if (params->meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (params->meter_policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ int res = profile_inline_ops->flow_mtr_create_meter(internals->flw_dev,
+ caller_id,
+ mtr_id,
+ params->meter_profile_id,
+ params->meter_policy_id,
+ params->stats_mask);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_destroy_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_destroy_meter(internals->flw_dev, caller_id, mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_stats_adjust_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ uint64_t adjust_value,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ const uint64_t adjust_bit = 1ULL << 63;
+ const uint64_t probe_bit = 1ULL << 62;
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (adjust_value & adjust_bit) {
+ adjust_value &= adjust_bit - 1;
+
+ if (adjust_value > (uint64_t)UINT32_MAX) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "Adjust value is out of range");
+ }
+
+ if (profile_inline_ops->flm_mtr_adjust_stats(internals->flw_dev, caller_id, mtr_id,
+ (uint32_t)adjust_value)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to adjust offloaded MTR");
+ }
+
+ return 0;
+ }
+
+ if (adjust_value & probe_bit) {
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_probe_meter(internals->flw_dev, caller_id,
+ mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to offload to hardware");
+ }
+
+ return 0;
+ }
+
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Meter stats update requires that bit 63 or bit 62 of \"stats_mask\" is set.");
+}
+
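The `stats_update` hook above overloads its 64-bit argument: bit 63 selects a counter adjustment (with the value in the low 32 bits), while bit 62 selects a flow probe. Helpers an application might use to build and decode that argument, mirroring the driver's checks (the names are hypothetical, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

#define MTR_ADJUST_BIT (1ULL << 63)	/* bit 63: adjust request */
#define MTR_PROBE_BIT  (1ULL << 62)	/* bit 62: probe request */

static inline uint64_t mtr_encode_adjust(uint32_t adjust_value)
{
	return MTR_ADJUST_BIT | adjust_value;
}

static inline uint64_t mtr_encode_probe(void)
{
	return MTR_PROBE_BIT;
}

/* Decode as the driver does: the adjust bit takes precedence, the value
 * is masked to the low 62 bits and must fit in 32 bits.
 * Returns 1 for a valid adjust, 0 if not an adjust, -1 if out of range. */
static inline int mtr_is_adjust(uint64_t v, uint32_t *adjust_value)
{
	if (!(v & MTR_ADJUST_BIT))
		return 0;

	v &= MTR_ADJUST_BIT - 1;

	if (v > (uint64_t)UINT32_MAX)
		return -1;

	*adjust_value = (uint32_t)v;
	return 1;
}
```

An application would pass `mtr_encode_adjust(n)` or `mtr_encode_probe()` as the `stats_mask` argument of `rte_mtr_stats_update()` to reach the corresponding branch in `eth_mtr_stats_adjust_inline`.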
+static int eth_mtr_stats_read_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ memset(stats, 0x0, sizeof(struct rte_mtr_stats));
+ profile_inline_ops->flm_mtr_read_stats(internals->flw_dev, caller_id, mtr_id, stats_mask,
+ &stats->n_pkts[RTE_COLOR_GREEN],
+ &stats->n_bytes[RTE_COLOR_GREEN], clear);
+
+ return 0;
+}
+
+/*
+ * Ops setup
+ */
+
+static const struct rte_mtr_ops mtr_ops_inline = {
+ .capabilities_get = eth_mtr_capabilities_get_inline,
+ .meter_profile_add = eth_mtr_meter_profile_add_inline,
+ .meter_profile_delete = eth_mtr_meter_profile_delete_inline,
+ .create = eth_mtr_create_inline,
+ .destroy = eth_mtr_destroy_inline,
+ .meter_policy_add = eth_mtr_meter_policy_add_inline,
+ .meter_policy_delete = eth_mtr_meter_policy_delete_inline,
+ .stats_update = eth_mtr_stats_adjust_inline,
+ .stats_read = eth_mtr_stats_read_inline,
+};
+
+static int eth_mtr_ops_get(struct rte_eth_dev *eth_dev, void *ops)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ ntdrv_4ga_t *p_nt_drv = &internals->p_drv->ntdrv;
+ enum fpga_info_profile profile = p_nt_drv->adapter_info.fpga_info.profile;
+
+ switch (profile) {
+ case FPGA_INFO_PROFILE_INLINE:
+ *(const struct rte_mtr_ops **)ops = &mtr_ops_inline;
+ break;
+
+ case FPGA_INFO_PROFILE_UNKNOWN:
+
+ /* fallthrough */
+ case FPGA_INFO_PROFILE_CAPTURE:
+
+ /* fallthrough */
+ default:
+ NT_LOG(ERR, NTHW, "" PCIIDENT_PRINT_STR ": fpga profile not supported",
+ PCIIDENT_TO_DOMAIN(p_nt_drv->pciident),
+ PCIIDENT_TO_BUSNR(p_nt_drv->pciident),
+ PCIIDENT_TO_DEVNR(p_nt_drv->pciident),
+ PCIIDENT_TO_FUNCNR(p_nt_drv->pciident));
+ return -1;
+ }
+
+ return 0;
+}
+
+static struct meter_ops_s meter_ops = {
+ .eth_mtr_ops_get = eth_mtr_ops_get,
+};
+
+void meter_init(void)
+{
+ NT_LOG(DBG, NTNIC, "Meter ops initialized");
+ register_meter_ops(&meter_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index eca67dbd62..e53882b343 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1684,7 +1684,7 @@ static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_con
return 0;
}
-static const struct eth_dev_ops nthw_eth_dev_ops = {
+static struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
.dev_stop = eth_dev_stop,
@@ -1707,6 +1707,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .mtr_ops_get = NULL,
.flow_ops_get = dev_flow_ops_get,
.xstats_get = eth_xstats_get,
.xstats_get_names = eth_xstats_get_names,
@@ -2170,6 +2171,14 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ const struct meter_ops_s *meter_ops = get_meter_ops();
+
+ if (meter_ops != NULL)
+ nthw_eth_dev_ops.mtr_ops_get = meter_ops->eth_mtr_ops_get;
+
+ else
+ NT_LOG(DBG, NTNIC, "Meter module is not initialized");
+
/* Initialize the queue system */
if (err == 0) {
sg_ops = get_sg_ops();
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 6737d18a6f..10aa778a57 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,27 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+/*
+ *
+ */
+static struct meter_ops_s *meter_ops;
+
+void register_meter_ops(struct meter_ops_s *ops)
+{
+ meter_ops = ops;
+}
+
+const struct meter_ops_s *get_meter_ops(void)
+{
+ if (meter_ops == NULL)
+ meter_init();
+
+ return meter_ops;
+}
+
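The meter module hooks into the driver through the usual register/get pattern, with lazy initialization: `get_meter_ops()` calls `meter_init()` on first use if nothing has been registered yet, so callers never see a NULL ops table once the module is linked in. The pattern, modeled as a tiny stand-alone sketch (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct demo_ops_s {
	int (*op)(int);
};

static const struct demo_ops_s *demo_ops;

static int demo_op(int x)
{
	return x * 2;
}

static const struct demo_ops_s demo_ops_impl = { .op = demo_op };

/* The module's init function registers its ops table exactly once. */
static void demo_init(void)
{
	demo_ops = &demo_ops_impl;
}

/* Getter triggers initialization lazily on first use. */
static const struct demo_ops_s *get_demo_ops(void)
{
	if (demo_ops == NULL)
		demo_init();

	return demo_ops;
}
```

This keeps the module optional: code such as `nthw_pci_dev_init` can probe `get_meter_ops()` and simply skip wiring `mtr_ops_get` when the module is absent.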
+/*
+ *
+ */
static const struct ntnic_filter_ops *ntnic_filter_ops;
void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1e9dcd549f..3fbbee6490 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -115,6 +115,18 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+/* Meter ops section */
+struct meter_ops_s {
+ int (*eth_mtr_ops_get)(struct rte_eth_dev *eth_dev, void *ops);
+};
+
+void register_meter_ops(struct meter_ops_s *ops);
+const struct meter_ops_s *get_meter_ops(void);
+void meter_init(void);
+
+/*
+ *
+ */
struct ntnic_filter_ops {
int (*poll_statistics)(struct pmd_internals *internals);
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v3 73/73] net/ntnic: update meter documentation
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
` (71 preceding siblings ...)
2024-10-23 17:00 ` [PATCH v3 72/73] net/ntnic: add meter module Serhii Iliushyk
@ 2024-10-23 17:00 ` Serhii Iliushyk
72 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-23 17:00 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Update ntnic.ini, ntnic.rst, and the release notes
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
3 files changed, 3 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index af2981ccf6..0e58c2ca42 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -43,3 +43,4 @@ queue = Y
raw_decap = Y
raw_encap = Y
rss = Y
+meter = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e5a8d71892..4ae94b161c 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -70,6 +70,7 @@ Features
- Exact match of 140 million flows and policies.
- Basic stats
- Extended stats
+- Flow metering, including meter policy API.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index b449b01dc8..1124d5a64c 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -163,6 +163,7 @@ New Features
* Added flow handling API
* Added statistics API
* Added age rte flow action support
+ * Added meter flow metering and flow policy support
* **Added cryptodev queue pair reset support.**
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* Re: [PATCH v3 53/73] net/ntnic: enable RSS feature
2024-10-23 17:00 ` [PATCH v3 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
@ 2024-10-28 16:15 ` Stephen Hemminger
0 siblings, 0 replies; 405+ messages in thread
From: Stephen Hemminger @ 2024-10-28 16:15 UTC (permalink / raw)
To: Serhii Iliushyk; +Cc: dev, mko-plv, ckm, andrew.rybchenko, ferruh.yigit
On Wed, 23 Oct 2024 19:00:01 +0200
Serhii Iliushyk <sil-plv@napatech.com> wrote:
> +
> + rte_memcpy(&tmp_rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
Avoid use of rte_memcpy(); it has less checking than memcpy().
The only place it can make sense is in the hot data path, where the compiler is more
conservative about alignment.
> + rte_memcpy(&ndev->rss_conf, &tmp_rss_conf, sizeof(struct nt_eth_rss_conf));
>
Use structure assignment instead of memcpy() to keep type checking.
> + if (rss_conf->rss_key != NULL) {
> + int key_len = rss_conf->rss_key_len < MAX_RSS_KEY_LEN ? rss_conf->rss_key_len
> + : MAX_RSS_KEY_LEN;
Use RTE_MIN(), and the key_len variable should not be signed.
> + memset(rss_conf->rss_key, 0, rss_conf->rss_key_len);
> + rte_memcpy(rss_conf->rss_key, &ndev->rss_conf.rss_key, key_len);
> +static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
> +{
> + const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
> +
> + if (flow_filter_ops == NULL) {
> + NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
> + return -1;
> + }
> +
> + struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
Since dev_private is void *, no need for the cast here (in C code).
> +
> + struct flow_nic_dev *ndev = internals->flw_dev->ndev;
> + struct nt_eth_rss_conf tmp_rss_conf = { 0 };
> + const int hsh_idx = 0; /* hsh index 0 means the default receipt in HSH module */
> + int res = 0;
> +
> + if (rss_conf->rss_key != NULL) {
> + if (rss_conf->rss_key_len > MAX_RSS_KEY_LEN) {
> + NT_LOG(ERR, NTNIC,
> + "ERROR: - RSS hash key length %u exceeds maximum value %u",
> + rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
> + return -1;
> + }
> +
> + rte_memcpy(&tmp_rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
> + }
> +
> + tmp_rss_conf.algorithm = rss_conf->algorithm;
> +
> + tmp_rss_conf.rss_hf = rss_conf->rss_hf;
> + res = flow_filter_ops->flow_nic_set_hasher_fields(ndev, hsh_idx, tmp_rss_conf);
In general, this code is good about moving declarations next to first use.
But here res is initialized to 0 and then immediately set again from flow_nic_set_hasher_fields().
Why not just move the declaration there?
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 00/86] Provide flow filter API and statistics
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (74 preceding siblings ...)
2024-10-23 16:59 ` [PATCH v3 " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 01/86] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
` (86 more replies)
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
76 siblings, 87 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The list of updates provided by the patchset:
- FW version
- Speed capabilities
- Link status (Link update only)
- Unicast MAC filter
- Multicast MAC filter
- Promiscuous mode (Enable only. The device always runs in promiscuous mode)
- Multiple TX and RX queues.
- Scattered and gather for TX and RX.
- RSS hash
- RSS key update
- RSS based on VLAN or 5-tuple.
- RSS using different combinations of fields: L3 only, L4 only or both, and
source only, destination only or both.
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
- VLAN filtering.
- RX VLAN stripping via raw decap.
- TX VLAN insertion via raw encap.
- Flow API.
- Multiple process.
- Tunnel types: GTP.
- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
verification.
- Support for multiple rte_flow groups.
- Encapsulation and decapsulation of GTP data.
- Packet modification: NAT, TTL decrement, DSCP tagging
- Traffic mirroring.
- Jumbo frame support.
- Port and queue statistics.
- RMON statistics in extended stats.
- Flow metering, including meter policy API.
- Link state information.
- CAM and TCAM based matching.
- Exact match of 140 million flows and policies.
- Basic stats
- Extended stats
- Flow update. Update of the action list for a specific flow
- Asynchronous flow API
- MTU update
Update: the pthread API was replaced with the RTE spinlock in a separate patch.
Danylo Vodopianov (43):
net/ntnic: add API for configuration NT flow dev
net/ntnic: add item UDP
net/ntnic: add action TCP
net/ntnic: add action VLAN
net/ntnic: add item SCTP
net/ntnic: add items IPv6 and ICMPv6
net/ntnic: add action modify field
net/ntnic: add items gtp and actions raw encap/decap
net/ntnic: add cat module
net/ntnic: add SLC LR module
net/ntnic: add PDB module
net/ntnic: add QSL module
net/ntnic: add KM module
net/ntnic: add hash API
net/ntnic: add TPE module
net/ntnic: add FLM module
net/ntnic: add flm rcp module
net/ntnic: add learn flow queue handling
net/ntnic: match and action db attributes were added
net/ntnic: add statistics API
net/ntnic: add rpf module
net/ntnic: add statistics poll
net/ntnic: added flm stat interface
net/ntnic: add tsm module
net/ntnic: add xstats
net/ntnic: added flow statistics
net/ntnic: add scrub registers
net/ntnic: add flow aging API
net/ntnic: add aging API to the inline profile
net/ntnic: add flow info and flow configure APIs
net/ntnic: add flow aging event
net/ntnic: add termination thread
net/ntnic: add aging documentation
net/ntnic: add meter API
net/ntnic: add meter module
net/ntnic: update meter documentation
net/ntnic: add action update
net/ntnic: add flow action update
net/ntnic: flow update was added
net/ntnic: add async create/destroy API declaration
net/ntnic: add async template API declaration
net/ntnic: add async flow create/delete API implementation
net/ntnic: add async template APIs implementation
Oleksandr Kolomeiets (18):
net/ntnic: add flow dump feature
net/ntnic: add flow flush
net/ntnic: sort FPGA registers alphanumerically
net/ntnic: add CSU module registers
net/ntnic: add FLM module registers
net/ntnic: add HFU module registers
net/ntnic: add IFR module registers
net/ntnic: add MAC Rx module registers
net/ntnic: add MAC Tx module registers
net/ntnic: add RPP LR module registers
net/ntnic: add SLC LR module registers
net/ntnic: add Tx CPY module registers
net/ntnic: add Tx INS module registers
net/ntnic: add Tx RPL module registers
net/ntnic: add STA module
net/ntnic: add TSM module
net/ntnic: update documentation
net/ntnic: add MTU configuration
Serhii Iliushyk (25):
net/ntnic: add flow filter API
net/ntnic: add minimal create/destroy flow operations
net/ntnic: add internal flow create/destroy API
net/ntnic: add minimal NT flow inline profile
net/ntnic: add management API for NT flow profile
net/ntnic: add NT flow profile management implementation
net/ntnic: add create/destroy implementation for NT flows
net/ntnic: add infrastructure for flow actions and items
net/ntnic: add action queue
net/ntnic: add action mark
net/ntnic: add action jump
net/ntnic: add action drop
net/ntnic: add item eth
net/ntnic: add item IPv4
net/ntnic: add item ICMP
net/ntnic: add item port ID
net/ntnic: add item void
net/ntnic: add GMF (Generic MAC Feeder) module
net/ntnic: update alignment for virt queue structs
net/ntnic: enable RSS feature
net/ntnic: update documentation for flow actions update
net/ntnic: migrate to the RTE spinlock
net/ntnic: remove unnecessary type cast
net/ntnic: update async flow API documentation
net/ntnic: update documentation for set MTU
doc/guides/nics/features/ntnic.ini | 33 +
doc/guides/nics/ntnic.rst | 52 +
doc/guides/rel_notes/release_24_11.rst | 7 +
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 598 ++
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 +-
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 73 +
drivers/net/ntnic/include/flow_api.h | 142 +-
drivers/net/ntnic/include/flow_api_engine.h | 380 +
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 256 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 5 +
drivers/net/ntnic/include/ntnic_stat.h | 265 +
drivers/net/ntnic/include/ntos_drv.h | 24 +
.../ntnic/include/stream_binary_flow_api.h | 67 +
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 +
drivers/net/ntnic/meson.build | 20 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +
.../net/ntnic/nthw/core/include/nthw_i2cm.h | 4 +-
.../net/ntnic/nthw/core/include/nthw_rmc.h | 6 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 49 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 +
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 30 +
drivers/net/ntnic/nthw/core/nthw_rpf.c | 120 +
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 935 ++-
drivers/net/ntnic/nthw/flow_api/flow_group.c | 99 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
.../net/ntnic/nthw/flow_api/flow_id_table.c | 145 +
.../net/ntnic/nthw/flow_api/flow_id_table.h | 26 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1171 ++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 457 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 723 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 853 +++
.../flow_api/profile_inline/flm_age_queue.c | 164 +
.../flow_api/profile_inline/flm_age_queue.h | 42 +
.../flow_api/profile_inline/flm_evt_queue.c | 293 +
.../flow_api/profile_inline/flm_evt_queue.h | 55 +
.../flow_api/profile_inline/flm_lrn_queue.c | 70 +
.../flow_api/profile_inline/flm_lrn_queue.h | 25 +
.../profile_inline/flow_api_hw_db_inline.c | 3000 ++++++++
.../profile_inline/flow_api_hw_db_inline.h | 394 ++
.../profile_inline/flow_api_profile_inline.c | 6086 +++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 132 +
.../flow_api_profile_inline_config.h | 127 +
.../ntnic/nthw/flow_filter/flow_nthw_flm.c | 47 +-
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 +
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
drivers/net/ntnic/nthw/nthw_rac.c | 38 +-
drivers/net/ntnic/nthw/nthw_rac.h | 2 +-
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 498 ++
.../supported/nthw_fpga_9563_055_049_0000.c | 3317 ++++++---
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 11 +-
.../nthw/supported/nthw_fpga_mod_str_map.c | 2 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 5 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 48 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 205 +
drivers/net/ntnic/ntnic_ethdev.c | 813 ++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 1348 ++++
drivers/net/ntnic/ntnic_mod_reg.c | 111 +
drivers/net/ntnic/ntnic_mod_reg.h | 331 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 +++
drivers/net/ntnic/ntutil/nt_util.h | 12 +
79 files changed, 25772 insertions(+), 1109 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 01/86] net/ntnic: add API for configuration NT flow dev
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-30 1:54 ` Ferruh Yigit
2024-10-29 16:41 ` [PATCH v4 02/86] net/ntnic: add flow filter API Serhii Iliushyk
` (85 subsequent siblings)
86 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
This API allows enabling a flow profile for NT SmartNICs
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 30 +++
drivers/net/ntnic/include/flow_api_engine.h | 5 +
drivers/net/ntnic/include/ntos_drv.h | 1 +
.../ntnic/include/stream_binary_flow_api.h | 9 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 221 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 22 ++
drivers/net/ntnic/ntnic_mod_reg.c | 5 +
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++
8 files changed, 307 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 984450afdc..c80906ec50 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -34,6 +34,8 @@ struct flow_eth_dev {
struct flow_nic_dev *ndev;
/* NIC port id */
uint8_t port;
+ /* App assigned port_id - may be DPDK port_id */
+ uint32_t port_id;
/* 0th for exception */
struct flow_queue_id_s rx_queue[FLOW_MAX_QUEUES + 1];
@@ -41,6 +43,9 @@ struct flow_eth_dev {
/* VSWITCH has exceptions sent on queue 0 per design */
int num_queues;
+ /* QSL_HSH index if RSS needed QSL v6+ */
+ int rss_target_id;
+
struct flow_eth_dev *next;
};
@@ -48,6 +53,8 @@ struct flow_eth_dev {
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
uint16_t ports; /* number of in-ports addressable on this NIC */
+ /* flow profile this NIC is initially prepared for */
+ enum flow_eth_dev_profile flow_profile;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
@@ -73,6 +80,14 @@ struct flow_nic_dev {
extern const char *dbg_res_descr[];
+#define flow_nic_set_bit(arr, x) \
+ do { \
+ uint8_t *_temp_arr = (arr); \
+ size_t _temp_x = (x); \
+ _temp_arr[_temp_x / 8] = \
+ (uint8_t)(_temp_arr[_temp_x / 8] | (uint8_t)(1 << (_temp_x % 8))); \
+ } while (0)
+
#define flow_nic_unset_bit(arr, x) \
do { \
size_t _temp_x = (x); \
@@ -85,6 +100,18 @@ extern const char *dbg_res_descr[];
(arr[_temp_x / 8] & (uint8_t)(1 << (_temp_x % 8))); \
})
+#define flow_nic_mark_resource_used(_ndev, res_type, index) \
+ do { \
+ struct flow_nic_dev *_temp_ndev = (_ndev); \
+ typeof(res_type) _temp_res_type = (res_type); \
+ size_t _temp_index = (index); \
+ NT_LOG(DBG, FILTER, "mark resource used: %s idx %zu", \
+ dbg_res_descr[_temp_res_type], _temp_index); \
+ assert(flow_nic_is_bit_set(_temp_ndev->res[_temp_res_type].alloc_bm, \
+ _temp_index) == 0); \
+ flow_nic_set_bit(_temp_ndev->res[_temp_res_type].alloc_bm, _temp_index); \
+ } while (0)
+
#define flow_nic_mark_resource_unused(_ndev, res_type, index) \
do { \
typeof(res_type) _temp_res_type = (res_type); \
@@ -97,6 +124,9 @@ extern const char *dbg_res_descr[];
#define flow_nic_is_resource_used(_ndev, res_type, index) \
(!!flow_nic_is_bit_set((_ndev)->res[res_type].alloc_bm, index))
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment);
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index db5e6fe09d..d025677e25 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -41,6 +41,11 @@ enum res_type_e {
RES_INVALID
};
+/*
+ * Flow NIC offload management
+ */
+#define MAX_OUTPUT_DEST (128)
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index d51d1e3677..8fd577dfe3 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -86,6 +86,7 @@ struct __rte_cache_aligned ntnic_tx_queue {
struct pmd_internals {
const struct rte_pci_device *pci_dev;
+ struct flow_eth_dev *flw_dev;
char name[20];
int n_intf_no;
int lpbk_mode;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 10529b8843..47e5353344 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,11 +12,20 @@
#define FLOW_MAX_QUEUES 128
+/*
+ * Flow eth dev profile determines how the FPGA module resources are
+ * managed and what features are available
+ */
+enum flow_eth_dev_profile {
+ FLOW_ETH_DEV_PROFILE_INLINE = 0,
+};
+
struct flow_queue_id_s {
int id;
int hw_id;
};
struct flow_eth_dev; /* port device */
+struct flow_handle;
#endif /* _STREAM_BINARY_FLOW_API_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34e84559eb..f49aca79c1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_nic_setup.h"
#include "ntnic_mod_reg.h"
+#include "flow_api.h"
#include "flow_filter.h"
const char *dbg_res_descr[] = {
@@ -35,6 +36,24 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Resources
+ */
+
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment)
+{
+ for (unsigned int i = 0; i < ndev->res[res_type].resource_count; i += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, i)) {
+ flow_nic_mark_resource_used(ndev, res_type, i);
+ ndev->res[res_type].ref[i] = 1;
+ return i;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
@@ -55,10 +74,60 @@ int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return !!ndev->res[res_type].ref[index];/* if 0 resource has been freed */
}
+/*
+ * Nic port/adapter lookup
+ */
+
+static struct flow_eth_dev *nic_and_port_to_eth_dev(uint8_t adapter_no, uint8_t port)
+{
+ struct flow_nic_dev *nic_dev = dev_base;
+
+ while (nic_dev) {
+ if (nic_dev->adapter_no == adapter_no)
+ break;
+
+ nic_dev = nic_dev->next;
+ }
+
+ if (!nic_dev)
+ return NULL;
+
+ struct flow_eth_dev *dev = nic_dev->eth_base;
+
+ while (dev) {
+ if (port == dev->port)
+ return dev;
+
+ dev = dev->next;
+ }
+
+ return NULL;
+}
+
+static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
+{
+ struct flow_nic_dev *ndev = dev_base;
+
+ while (ndev) {
+ if (adapter_no == ndev->adapter_no)
+ break;
+
+ ndev = ndev->next;
+ }
+
+ return ndev;
+}
+
/*
* Device Management API
*/
+static void nic_insert_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *dev)
+{
+ dev->next = ndev->eth_base;
+ ndev->eth_base = dev;
+}
+
static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *eth_dev)
{
struct flow_eth_dev *dev = ndev->eth_base, *prev = NULL;
@@ -242,6 +311,154 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
return -1;
}
+/*
+ * adapter_no physical adapter no
+ * port_no local port no
+ * alloc_rx_queues number of rx-queues to allocate for this eth_dev
+ */
+static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no, uint32_t port_id,
+ int alloc_rx_queues, struct flow_queue_id_s queue_ids[],
+ int *rss_target_id, enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+
+ int i;
+ struct flow_eth_dev *eth_dev = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "Get eth-port adapter %i, port %i, port_id %u, rx queues %i, profile %i",
+ adapter_no, port_no, port_id, alloc_rx_queues, flow_profile);
+
+ if (MAX_OUTPUT_DEST < FLOW_MAX_QUEUES) {
+ assert(0);
+ NT_LOG(ERR, FILTER,
+ "ERROR: Internal array for multiple queues too small for API");
+ }
+
+ pthread_mutex_lock(&base_mtx);
+ struct flow_nic_dev *ndev = get_nic_dev_from_adapter_no(adapter_no);
+
+ if (!ndev) {
+ /* Error - no flow api found on specified adapter */
+ NT_LOG(ERR, FILTER, "ERROR: no flow interface registered for adapter %d",
+ adapter_no);
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if (ndev->ports < ((uint16_t)port_no + 1)) {
+ NT_LOG(ERR, FILTER, "ERROR: port exceeds supported port range for adapter");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if ((alloc_rx_queues - 1) > FLOW_MAX_QUEUES) { /* 0th is exception so +1 */
+ NT_LOG(ERR, FILTER,
+ "ERROR: Exceeds supported number of rx queues per eth device");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ /* don't accept multiple eth_dev's on same NIC and same port */
+ eth_dev = nic_and_port_to_eth_dev(adapter_no, port_no);
+
+ if (eth_dev) {
+ NT_LOG(DBG, FILTER, "Re-opening existing NIC port device: NIC DEV: %i Port %i",
+ adapter_no, port_no);
+ pthread_mutex_unlock(&base_mtx);
+ flow_delete_eth_dev(eth_dev);
+ eth_dev = NULL;
+ }
+
+ eth_dev = calloc(1, sizeof(struct flow_eth_dev));
+
+ if (!eth_dev) {
+ NT_LOG(ERR, FILTER, "ERROR: calloc failed");
+ goto err_exit1;
+ }
+
+ pthread_mutex_lock(&ndev->mtx);
+
+ eth_dev->ndev = ndev;
+ eth_dev->port = port_no;
+ eth_dev->port_id = port_id;
+
+ /* Allocate the requested queues in HW for this dev */
+
+ for (i = 0; i < alloc_rx_queues; i++) {
+#ifdef SCATTER_GATHER
+ eth_dev->rx_queue[i] = queue_ids[i];
+#else
+ int queue_id = flow_nic_alloc_resource(ndev, RES_QUEUE, 1);
+
+ if (queue_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: no more free queue IDs in NIC");
+ goto err_exit0;
+ }
+
+ eth_dev->rx_queue[eth_dev->num_queues].id = (uint8_t)queue_id;
+ eth_dev->rx_queue[eth_dev->num_queues].hw_id =
+ ndev->be.iface->alloc_rx_queue(ndev->be.be_dev,
+ eth_dev->rx_queue[eth_dev->num_queues].id);
+
+ if (eth_dev->rx_queue[eth_dev->num_queues].hw_id < 0) {
+ NT_LOG(ERR, FILTER, "ERROR: could not allocate a new queue");
+ goto err_exit0;
+ }
+
+ if (queue_ids)
+ queue_ids[eth_dev->num_queues] = eth_dev->rx_queue[eth_dev->num_queues];
+#endif
+
+ if (i == 0 && (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE && exception_path)) {
+ /*
+ * Init QSL UNM - unmatched - redirects otherwise discarded
+ * packets in QSL
+ */
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_DEST_QUEUE, eth_dev->port,
+ eth_dev->rx_queue[0].hw_id) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1) < 0)
+ goto err_exit0;
+ }
+
+ eth_dev->num_queues++;
+ }
+
+ eth_dev->rss_target_id = -1;
+
+ *rss_target_id = eth_dev->rss_target_id;
+
+ nic_insert_eth_port_dev(ndev, eth_dev);
+
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+ return eth_dev;
+
+err_exit0:
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+
+err_exit1:
+ if (eth_dev)
+ free(eth_dev);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ NT_LOG(DBG, FILTER, "ERR in %s", __func__);
+ return NULL; /* Error exit */
+}
+
struct flow_nic_dev *flow_api_create(uint8_t adapter_no, const struct flow_api_backend_ops *be_if,
void *be_dev)
{
@@ -383,6 +600,10 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
+ /*
+ * Device Management API
+ */
+ .flow_get_eth_dev = flow_get_eth_dev,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bff893ec7a..510c0e5d23 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1355,6 +1355,13 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1378,10 +1385,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
uint32_t n_port_mask = -1; /* All ports enabled by default */
uint32_t nb_rx_queues = 1;
uint32_t nb_tx_queues = 1;
+ uint32_t exception_path = 0;
struct flow_queue_id_s queue_ids[MAX_QUEUES];
int n_phy_ports;
struct port_link_speed pls_mbps[NUM_ADAPTER_PORTS_MAX] = { 0 };
int num_port_speeds = 0;
+ enum flow_eth_dev_profile profile = FLOW_ETH_DEV_PROFILE_INLINE;
+
NT_LOG_DBGX(DBG, NTNIC, "Dev %s PF #%i Init : %02x:%02x:%i", pci_dev->name,
pci_dev->addr.function, pci_dev->addr.bus, pci_dev->addr.devid,
pci_dev->addr.function);
@@ -1681,6 +1691,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (flow_filter_ops != NULL) {
+ internals->flw_dev = flow_filter_ops->flow_get_eth_dev(0, n_intf_no,
+ eth_dev->data->port_id, nb_rx_queues, queue_ids,
+ &internals->txq_scg[0].rss_target_id, profile, exception_path);
+
+ if (!internals->flw_dev) {
+ NT_LOG(ERR, NTNIC,
+ "Error creating port. Resource exhaustion in HW");
+ return -1;
+ }
+ }
+
/* connect structs */
internals->p_drv = p_drv;
eth_dev->data->dev_private = internals;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index a03c97801b..ac8afdef6a 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,6 +118,11 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+const struct profile_inline_ops *get_profile_inline_ops(void)
+{
+ return NULL;
+}
+
static const struct flow_filter_ops *flow_filter_ops;
void register_flow_filter_ops(const struct flow_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 5b97b3d8ac..017d15d7bc 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include "flow_api.h"
+#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
#include "nthw_platform_drv.h"
#include "nthw_drv.h"
@@ -223,10 +224,23 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+const struct profile_inline_ops *get_profile_inline_ops(void);
+
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
int adapter_no);
int (*flow_filter_done)(struct flow_nic_dev *dev);
+ /*
+ * Device Management API
+ */
+ struct flow_eth_dev *(*flow_get_eth_dev)(uint8_t adapter_no,
+ uint8_t hw_port_no,
+ uint32_t port_id,
+ int alloc_rx_queues,
+ struct flow_queue_id_s queue_ids[],
+ int *rss_target_id,
+ enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path);
};
void register_flow_filter_ops(const struct flow_filter_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 02/86] net/ntnic: add flow filter API
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 01/86] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 03/86] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
` (84 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Enable flow ops getter
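For readers unfamiliar with the registration scheme used throughout ntnic_mod_reg.c, the register/get pair added here can be sketched stand-alone as follows. This is a minimal illustration; the struct and function names are invented for the sketch and are not the actual ntnic symbols:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for an ops table such as rte_flow_ops. */
struct my_flow_ops {
	int (*create)(int port);
};

static const struct my_flow_ops *registered_ops;

static int dummy_create(int port)
{
	return port;	/* placeholder implementation */
}

static const struct my_flow_ops default_ops = { .create = dummy_create };

/* A module registers its ops table once, at init time. */
static void register_my_flow_ops(const struct my_flow_ops *ops)
{
	registered_ops = ops;
}

/* Consumers fetch the table lazily, mirroring get_dev_flow_ops(),
 * which calls dev_flow_init() on first use. */
static const struct my_flow_ops *get_my_flow_ops(void)
{
	if (registered_ops == NULL)
		register_my_flow_ops(&default_ops);
	return registered_ops;
}
```

This indirection lets each module be compiled in or stubbed out without the core driver linking against it directly.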
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Replaced casts to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 13 +++++++
.../ntnic/include/stream_binary_flow_api.h | 2 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 7 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 37 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 15 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 5 +++
7 files changed, 80 insertions(+)
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
new file mode 100644
index 0000000000..802e6dcbe1
--- /dev/null
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -0,0 +1,13 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __CREATE_ELEMENTS_H__
+#define __CREATE_ELEMENTS_H__
+
+
+#include "stream_binary_flow_api.h"
+#include <rte_flow.h>
+
+#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 47e5353344..a6244d4082 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,8 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include "rte_flow.h"
+#include "rte_flow_driver.h"
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 3d9566a52e..d272c73c62 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -79,6 +79,7 @@ sources = files(
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
'ntlog/ntlog.c',
+ 'ntnic_filter/ntnic_filter.c',
'ntutil/nt_util.c',
'ntnic_mod_reg.c',
'ntnic_vfio.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 510c0e5d23..a509a8eb51 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1321,6 +1321,12 @@ eth_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, size_t fw_size
}
}
+static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct rte_flow_ops **ops)
+{
+ *ops = get_dev_flow_ops();
+ return 0;
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1349,6 +1355,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
};
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
new file mode 100644
index 0000000000..445139abc9
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -0,0 +1,37 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_flow_driver.h>
+#include "ntnic_mod_reg.h"
+
+static int
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ int res = 0;
+
+ return res;
+}
+
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ const struct rte_flow_item items[] __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct rte_flow *flow = NULL;
+
+ return flow;
+}
+
+static const struct rte_flow_ops dev_flow_ops = {
+ .create = eth_flow_create,
+ .destroy = eth_flow_destroy,
+};
+
+void dev_flow_init(void)
+{
+ register_dev_flow_ops(&dev_flow_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ac8afdef6a..ad2266116f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -137,3 +137,18 @@ const struct flow_filter_ops *get_flow_filter_ops(void)
return flow_filter_ops;
}
+
+static const struct rte_flow_ops *dev_flow_ops;
+
+void register_dev_flow_ops(const struct rte_flow_ops *ops)
+{
+ dev_flow_ops = ops;
+}
+
+const struct rte_flow_ops *get_dev_flow_ops(void)
+{
+ if (dev_flow_ops == NULL)
+ dev_flow_init();
+
+ return dev_flow_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 017d15d7bc..457dc58794 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -15,6 +15,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nthw_fpga_rst_nt200a0x.h"
#include "ntnic_virt_queue.h"
+#include "create_elements.h"
/* sg ops section */
struct sg_ops_s {
@@ -243,6 +244,10 @@ struct flow_filter_ops {
uint32_t exception_path);
};
+void register_dev_flow_ops(const struct rte_flow_ops *ops);
+const struct rte_flow_ops *get_dev_flow_ops(void);
+void dev_flow_init(void);
+
void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
--
2.45.0
* [PATCH v4 03/86] net/ntnic: add minimal create/destroy flow operations
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 01/86] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 02/86] net/ntnic: add flow filter API Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 04/86] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
` (83 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add a high-level API that describes the base create/destroy implementation
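The core of this patch is the element-copy loop in create_match_elements(). A self-contained sketch of the same loop is shown below; the types are simplified stand-ins for the rte_flow structures (only the fields the loop touches are modeled), not the real DPDK definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins; the real driver uses <rte_flow.h>. */
enum item_type { ITEM_TYPE_END = 0, ITEM_TYPE_ETH = 1, ITEM_TYPE_IPV4 = 2 };

struct flow_item {
	enum item_type type;
	const void *spec;
	const void *mask;
	const void *last;	/* ranges: unsupported, as in the driver */
};

/* Copy items into a fixed-size match array, mirroring
 * create_match_elements(): stop after copying END, reject ranges
 * and array overflow. Returns the number of copied elements. */
static int copy_match_elements(struct flow_item *dst,
	const struct flow_item *src, int max_elem)
{
	int eidx = 0;

	for (int i = 0; ; i++) {
		if (src[i].last != NULL)
			return -1;	/* item ranges not supported */
		if (eidx == max_elem)
			return -1;	/* too many elements */
		dst[eidx++] = src[i];
		if (src[i].type == ITEM_TYPE_END)
			return eidx;	/* END copied, done */
	}
}
```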
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Replaced casts to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 51 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 227 +++++++++++++++++-
drivers/net/ntnic/ntutil/nt_util.h | 3 +
3 files changed, 274 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 802e6dcbe1..179542d2b2 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -6,8 +6,59 @@
#ifndef __CREATE_ELEMENTS_H__
#define __CREATE_ELEMENTS_H__
+#include "stdint.h"
#include "stream_binary_flow_api.h"
#include <rte_flow.h>
+#define MAX_ELEMENTS 64
+#define MAX_ACTIONS 32
+
+struct cnv_match_s {
+ struct rte_flow_item rte_flow_item[MAX_ELEMENTS];
+};
+
+struct cnv_attr_s {
+ struct cnv_match_s match;
+ struct rte_flow_attr attr;
+ uint16_t forced_vlan_vid;
+ uint16_t caller_id;
+};
+
+struct cnv_action_s {
+ struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_queue queue;
+};
+
+/*
+ * Only needed because it eases the use of statistics through NTAPI,
+ * for faster integration into the NTAPI version of the driver.
+ * Therefore, this is only a good idea when running on a temporary NTAPI.
+ * The query() functionality must move to the flow engine once ported to the Open Source driver.
+ */
+
+struct rte_flow {
+ void *flw_hdl;
+ int used;
+
+ uint32_t flow_stat_id;
+
+ uint16_t caller_id;
+};
+
+enum nt_rte_flow_item_type {
+ NT_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
+ NT_RTE_FLOW_ITEM_TYPE_TUNNEL,
+};
+
+extern rte_spinlock_t flow_lock;
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem);
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset);
+
#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 445139abc9..74cf360da0 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -4,24 +4,237 @@
*/
#include <rte_flow_driver.h>
+#include "nt_util.h"
+#include "create_elements.h"
#include "ntnic_mod_reg.h"
+#include "ntos_system.h"
+
+#define MAX_RTE_FLOWS 8192
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
+static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
+{
+ if (error) {
+ error->cause = NULL;
+ error->message = rte_flow_error->message;
+
+ if (rte_flow_error->type == RTE_FLOW_ERROR_TYPE_NONE)
+ error->type = RTE_FLOW_ERROR_TYPE_NONE;
+
+ else
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+
+ return 0;
+}
+
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr)
+{
+ memset(&attribute->attr, 0x0, sizeof(struct rte_flow_attr));
+
+ if (attr) {
+ attribute->attr.group = attr->group;
+ attribute->attr.priority = attr->priority;
+ }
+
+ return 0;
+}
+
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem)
+{
+ int eidx = 0;
+ int iter_idx = 0;
+ int type = -1;
+
+ if (!items) {
+ NT_LOG(ERR, FILTER, "ERROR no items to iterate!");
+ return -1;
+ }
+
+ do {
+ type = items[iter_idx].type;
+
+ if (type < 0) {
+ if ((int)items[iter_idx].type == NT_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ type = NT_RTE_FLOW_ITEM_TYPE_TUNNEL;
+
+ } else {
+ NT_LOG(ERR, FILTER, "ERROR unknown item type received!");
+ return -1;
+ }
+ }
+
+ if (type >= 0) {
+ if (items[iter_idx].last) {
+ /* Ranges are not supported yet */
+ NT_LOG(ERR, FILTER, "ERROR ITEM-RANGE SETUP - NOT SUPPORTED!");
+ return -1;
+ }
+
+ if (eidx == max_elem) {
+ NT_LOG(ERR, FILTER, "ERROR TOO MANY ELEMENTS ENCOUNTERED!");
+ return -1;
+ }
+
+ match->rte_flow_item[eidx].type = type;
+ match->rte_flow_item[eidx].spec = items[iter_idx].spec;
+ match->rte_flow_item[eidx].mask = items[iter_idx].mask;
+
+ eidx++;
+ iter_idx++;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
+ return (type >= 0) ? 0 : -1;
+}
+
+int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ int max_elem __rte_unused,
+ uint32_t queue_offset __rte_unused)
+{
+ int type = -1;
+
+ return (type >= 0) ? 0 : -1;
+}
+
+static inline uint16_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + port + 1;
+}
+
+static int convert_flow(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct cnv_attr_s *attribute,
+ struct cnv_match_s *match,
+ struct cnv_action_s *action,
+ struct rte_flow_error *error)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t queue_offset = 0;
+
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!internals) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Missing eth_dev");
+ return -1;
+ }
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0) {
+ /*
+ * The queues coming from the main PMD will always start from 0
+ * When the port is a VF/vDPA port, the queues must be changed
+ * to match the queues allocated for VF/vDPA.
+ */
+ queue_offset = internals->vpq[0].id;
+ }
+
+ if (create_attr(attribute, attr) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL, "Error in attr");
+ return -1;
+ }
+
+ if (create_match_elements(match, items, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in items");
+ return -1;
+ }
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ if (create_action_elements_inline(action, actions,
+ MAX_ACTIONS, queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return -1;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return -1;
+ }
+
+ return 0;
+}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
- struct rte_flow_error *error __rte_unused)
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
+ struct rte_flow_error *error)
{
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!flow)
+ return 0;
return res;
}
-static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_attr *attr __rte_unused,
- const struct rte_flow_item items[] __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ struct cnv_attr_s attribute = { 0 };
+ struct cnv_match_s match = { 0 };
+ struct cnv_action_s action = { 0 };
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t flow_stat_id = 0;
+
+ if (convert_flow(eth_dev, attr, items, actions, &attribute, &match, &action, error) < 0)
+ return NULL;
+
+ /* Main application caller_id is port_id shifted above VF ports */
+ attribute.caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ convert_error(error, &flow_error);
+ return (struct rte_flow *)NULL;
+ }
+
struct rte_flow *flow = NULL;
+ rte_spinlock_lock(&flow_lock);
+ int i;
+
+ for (i = 0; i < MAX_RTE_FLOWS; i++) {
+ if (!nt_flows[i].used) {
+ nt_flows[i].flow_stat_id = flow_stat_id;
+
+ if (nt_flows[i].flow_stat_id < NT_MAX_COLOR_FLOW_STATS) {
+ nt_flows[i].used = 1;
+ flow = &nt_flows[i];
+ }
+
+ break;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
return flow;
}
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 64947f5fbf..71ecd6c68c 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -9,6 +9,9 @@
#include <stdint.h>
#include "nt4ga_link.h"
+/* Total max VDPA ports */
+#define MAX_VDPA_PORTS 128UL
+
#ifndef ARRAY_SIZE
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
--
2.45.0
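The fixed-size, spinlock-protected flow table used by eth_flow_create() in this patch can be sketched stand-alone like this. In the sketch, a pthread mutex stands in for rte_spinlock_t and the table is shrunk from MAX_RTE_FLOWS (8192) to keep it small:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_FLOWS 8	/* the driver uses MAX_RTE_FLOWS (8192) */

/* Stand-in for struct rte_flow; only slot-tracking fields are modeled. */
struct flow_slot {
	int used;
	uint32_t flow_stat_id;
};

static struct flow_slot flows[MAX_FLOWS];
static pthread_mutex_t flow_lock = PTHREAD_MUTEX_INITIALIZER;

/* Claim the first free slot under the lock, as eth_flow_create() does. */
static struct flow_slot *flow_slot_alloc(uint32_t stat_id)
{
	struct flow_slot *flow = NULL;

	pthread_mutex_lock(&flow_lock);
	for (int i = 0; i < MAX_FLOWS; i++) {
		if (!flows[i].used) {
			flows[i].used = 1;
			flows[i].flow_stat_id = stat_id;
			flow = &flows[i];
			break;
		}
	}
	pthread_mutex_unlock(&flow_lock);
	return flow;
}

/* Release a slot under the same lock, as eth_flow_destroy() does. */
static void flow_slot_free(struct flow_slot *flow)
{
	pthread_mutex_lock(&flow_lock);
	flow->used = 0;
	pthread_mutex_unlock(&flow_lock);
}
```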
* [PATCH v4 04/86] net/ntnic: add internal flow create/destroy API
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (2 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 03/86] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 05/86] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
` (82 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
NT-specific flow filter API for creating/destroying a flow
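The pattern this patch applies in flow_create()/flow_destroy() — fetch the registered ops table and fail gracefully when the module never registered — can be reduced to the following sketch. The names here are illustrative, not the ntnic symbols:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for profile_inline_ops. */
struct inline_ops {
	int (*destroy)(void *flow);
};

/* Set by the profile module's init; NULL until then. */
static const struct inline_ops *inline_ops_tbl;

/* Every entry point checks for a registered module before dispatching,
 * mirroring the NULL checks in flow_create()/flow_destroy(). */
static int flow_destroy_dispatch(void *flow)
{
	if (inline_ops_tbl == NULL) {
		fprintf(stderr, "profile_inline module uninitialized\n");
		return -1;	/* same error path as flow_destroy() */
	}
	return inline_ops_tbl->destroy(flow);
}

static int destroy_impl(void *flow)
{
	(void)flow;
	return 0;	/* placeholder implementation */
}

static const struct inline_ops impl = { .destroy = destroy_impl };
```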
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Replaced casts to void with __rte_unused
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 39 +++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 66 ++++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++++
3 files changed, 116 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index f49aca79c1..d779dc481f 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -117,6 +117,40 @@ static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
return ndev;
}
+/*
+ * Flow API
+ */
+
+static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ uint16_t forced_vlan_vid __rte_unused,
+ uint16_t caller_id __rte_unused,
+ const struct rte_flow_item item[] __rte_unused,
+ const struct rte_flow_action action[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return NULL;
+ }
+
+ return NULL;
+}
+
+static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
+ struct flow_handle *flow __rte_unused, struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return -1;
+}
/*
* Device Management API
@@ -604,6 +638,11 @@ static const struct flow_filter_ops ops = {
* Device Management API
*/
.flow_get_eth_dev = flow_get_eth_dev,
+ /*
+ * NT Flow API
+ */
+ .flow_create = flow_create,
+ .flow_destroy = flow_destroy,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 74cf360da0..b9d723c9dd 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -110,6 +110,13 @@ static inline uint16_t get_caller_id(uint16_t port)
return MAX_VDPA_PORTS + port + 1;
}
+static int is_flow_handle_typecast(struct rte_flow *flow)
+{
+ const void *first_element = &nt_flows[0];
+ const void *last_element = &nt_flows[MAX_RTE_FLOWS - 1];
+ return (void *)flow < first_element || (void *)flow > last_element;
+}
+
static int convert_flow(struct rte_eth_dev *eth_dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
@@ -173,9 +180,17 @@ static int convert_flow(struct rte_eth_dev *eth_dev,
}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
- struct rte_flow_error *error)
+eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
@@ -185,6 +200,20 @@ eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow
if (!flow)
return 0;
+ if (is_flow_handle_typecast(flow)) {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, (void *)flow, &flow_error);
+ convert_error(error, &flow_error);
+
+ } else {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, flow->flw_hdl,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ rte_spinlock_unlock(&flow_lock);
+ }
+
return res;
}
@@ -194,6 +223,13 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
@@ -213,8 +249,12 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
attribute.caller_id = get_caller_id(eth_dev->data->port_id);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ void *flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
convert_error(error, &flow_error);
- return (struct rte_flow *)NULL;
+ return (struct rte_flow *)flw_hdl;
}
struct rte_flow *flow = NULL;
@@ -236,6 +276,26 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
rte_spinlock_unlock(&flow_lock);
+ if (flow) {
+ flow->flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ if (!flow->flw_hdl) {
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ flow = NULL;
+ rte_spinlock_unlock(&flow_lock);
+
+ } else {
+ rte_spinlock_lock(&flow_lock);
+ flow->caller_id = attribute.caller_id;
+ rte_spinlock_unlock(&flow_lock);
+ }
+ }
+
return flow;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 457dc58794..ec8c1612d1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -242,6 +242,20 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ /*
+ * NT Flow API
+ */
+ struct flow_handle *(*flow_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item item[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
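The is_flow_handle_typecast() check introduced above classifies a handle by whether it points into the static nt_flows[] table: anything outside the table must be a raw driver handle that was cast to struct rte_flow *. A stand-alone sketch of that check is below, using uintptr_t comparisons since relational comparison of pointers to unrelated objects is formally undefined in C:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_FLOWS 4	/* reduced from MAX_RTE_FLOWS for the sketch */

struct flow { int used; };
static struct flow flow_tbl[MAX_FLOWS];

/* Returns nonzero when the handle does NOT live in flow_tbl[],
 * i.e. it is a raw driver handle, mirroring is_flow_handle_typecast(). */
static int is_external_handle(const struct flow *f)
{
	uintptr_t p = (uintptr_t)f;
	uintptr_t first = (uintptr_t)&flow_tbl[0];
	uintptr_t last = (uintptr_t)&flow_tbl[MAX_FLOWS - 1];

	return p < first || p > last;
}
```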
* [PATCH v4 05/86] net/ntnic: add minimal NT flow inline profile
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (3 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 04/86] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-30 1:56 ` Ferruh Yigit
2024-10-29 16:41 ` [PATCH v4 06/86] net/ntnic: add management API for NT flow profile Serhii Iliushyk
` (81 subsequent siblings)
86 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The flow profile implements all flow-related operations
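A pattern worth noting in this patch is the public/locked split in flow_destroy_profile_inline(): the public entry point takes the device mutex and delegates to a _locked variant that assumes the mutex is already held. A minimal sketch of that split, with simplified stand-in types:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Stand-in for flow_nic_dev; only the mutex and a counter are modeled. */
struct nic_dev {
	pthread_mutex_t mtx;
	int n_flows;
};

/* Caller must hold ndev->mtx, like flow_destroy_locked_profile_inline(). */
static int destroy_locked(struct nic_dev *ndev)
{
	if (ndev->n_flows == 0)
		return -1;	/* nothing to remove */
	ndev->n_flows--;
	return 0;
}

/* Public wrapper takes the lock, mirroring flow_destroy_profile_inline(). */
static int destroy_flow(struct nic_dev *ndev)
{
	pthread_mutex_lock(&ndev->mtx);
	int err = destroy_locked(ndev);
	pthread_mutex_unlock(&ndev->mtx);
	return err;
}
```

The split lets internal callers that already hold the device mutex reuse the same teardown code without deadlocking.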
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 15 +++++
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 28 +++++++-
.../profile_inline/flow_api_profile_inline.c | 65 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 33 ++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 12 +++-
drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++++
7 files changed, 174 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index c80906ec50..3bdfdd4f94 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -74,6 +74,21 @@ struct flow_nic_dev {
struct flow_nic_dev *next;
};
+enum flow_nic_err_msg_e {
+ ERR_SUCCESS = 0,
+ ERR_FAILED = 1,
+ ERR_OUTPUT_TOO_MANY = 3,
+ ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_ACTION_UNSUPPORTED = 28,
+ ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_OUTPUT_INVALID = 33,
+ ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_MSG_NO_MSG
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error);
+
/*
* Resources
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d272c73c62..f5605e81cb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d779dc481f..d0dad8e8f8 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -36,6 +36,29 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Error handling
+ */
+
+static const struct {
+ const char *message;
+} err_msg[] = {
+ /* 00 */ { "Operation successfully completed" },
+ /* 01 */ { "Operation failed" },
+ /* 29 */ { "Removing flow failed" },
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error)
+{
+ assert(msg < ERR_MSG_NO_MSG);
+
+ if (error) {
+ error->message = err_msg[msg].message;
+ error->type = (msg == ERR_SUCCESS) ? RTE_FLOW_ERROR_TYPE_NONE :
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+}
+
/*
* Resources
*/
@@ -136,7 +159,8 @@ static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
return NULL;
}
- return NULL;
+ return profile_inline_ops->flow_create_profile_inline(dev, attr,
+ forced_vlan_vid, caller_id, item, action, error);
}
static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
@@ -149,7 +173,7 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return -1;
}
- return -1;
+ return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
new file mode 100644
index 0000000000..a6293f5f82
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -0,0 +1,65 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "flow_api_profile_inline.h"
+#include "ntnic_mod_reg.h"
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ return NULL;
+}
+
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error)
+{
+ assert(dev);
+ assert(fh);
+
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ return err;
+}
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *flow,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (flow) {
+ /* Delete this flow */
+ pthread_mutex_lock(&dev->ndev->mtx);
+ err = flow_destroy_locked_profile_inline(dev, flow, error);
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ }
+
+ return err;
+}
+
+static const struct profile_inline_ops ops = {
+ /*
+ * Flow functionality
+ */
+ .flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
+ .flow_create_profile_inline = flow_create_profile_inline,
+ .flow_destroy_profile_inline = flow_destroy_profile_inline,
+};
+
+void profile_inline_init(void)
+{
+ register_profile_inline_ops(&ops);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
new file mode 100644
index 0000000000..a83cc299b4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -0,0 +1,33 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_H_
+#define _FLOW_API_PROFILE_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+#include "stream_binary_flow_api.h"
+
+/*
+ * Flow functionality
+ */
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+
+#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ad2266116f..593b56bf5b 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,9 +118,19 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+static const struct profile_inline_ops *profile_inline_ops;
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops)
+{
+ profile_inline_ops = ops;
+}
+
const struct profile_inline_ops *get_profile_inline_ops(void)
{
- return NULL;
+ if (profile_inline_ops == NULL)
+ profile_inline_init();
+
+ return profile_inline_ops;
}
static const struct flow_filter_ops *flow_filter_ops;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index ec8c1612d1..d133336fad 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -225,7 +225,30 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+struct profile_inline_ops {
+ /*
+ * Flow functionality
+ */
+ int (*flow_destroy_locked_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+ struct flow_handle *(*flow_create_profile_inline)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+};
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops);
const struct profile_inline_ops *get_profile_inline_ops(void);
+void profile_inline_init(void);
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 06/86] net/ntnic: add management API for NT flow profile
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (4 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 05/86] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 07/86] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
` (80 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The management API implements (re)setting of the NT flow dev.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 ++
drivers/net/ntnic/nthw/flow_api/flow_api.c | 60 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 20 +++++++
.../profile_inline/flow_api_profile_inline.h | 8 +++
drivers/net/ntnic/ntnic_mod_reg.h | 8 +++
6 files changed, 102 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 3bdfdd4f94..790b2f6b03 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -55,6 +55,7 @@ struct flow_nic_dev {
uint16_t ports; /* number of in-ports addressable on this NIC */
/* flow profile this NIC is initially prepared for */
enum flow_eth_dev_profile flow_profile;
+ int flow_mgnt_prepared;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index d025677e25..52ff3cb865 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -46,6 +46,11 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+struct flow_handle {
+ struct flow_eth_dev *dev;
+ struct flow_handle *next;
+};
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d0dad8e8f8..6800a8d834 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -10,6 +10,8 @@
#include "flow_api.h"
#include "flow_filter.h"
+#define SCATTER_GATHER
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -210,10 +212,29 @@ static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_de
static void flow_ndev_reset(struct flow_nic_dev *ndev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return;
+ }
+
/* Delete all eth-port devices created on this NIC device */
while (ndev->eth_base)
flow_delete_eth_dev(ndev->eth_base);
+	/* Error check: no flows should remain once all eth-ports are deleted */
+ while (ndev->flow_base) {
+ NT_LOG(ERR, FILTER,
+ "ERROR : Flows still defined but all eth-ports deleted. Flow %p",
+ ndev->flow_base);
+
+ profile_inline_ops->flow_destroy_profile_inline(ndev->flow_base->dev,
+ ndev->flow_base, NULL);
+ }
+
+ profile_inline_ops->done_flow_management_of_ndev_profile_inline(ndev);
+
km_free_ndev_resource_management(&ndev->km_res_handle);
kcc_free_ndev_resource_management(&ndev->kcc_res_handle);
@@ -255,6 +276,13 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
struct flow_nic_dev *ndev = eth_dev->ndev;
if (!ndev) {
@@ -271,6 +299,20 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
/* delete all created flows from this device */
pthread_mutex_lock(&ndev->mtx);
+ struct flow_handle *flow = ndev->flow_base;
+
+ while (flow) {
+ if (flow->dev == eth_dev) {
+ struct flow_handle *flow_next = flow->next;
+ profile_inline_ops->flow_destroy_locked_profile_inline(eth_dev, flow,
+ NULL);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
/*
* remove unmatched queue if setup in QSL
* remove exception queue setting in QSL UNM
@@ -445,6 +487,24 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->port = port_no;
eth_dev->port_id = port_id;
+	/* First time the NIC is initialized */
+ if (!ndev->flow_mgnt_prepared) {
+ ndev->flow_profile = flow_profile;
+
+		/* Initialize modules if needed - recipe 0 is used as no-match and must be set up */
+ if (profile_inline_ops != NULL &&
+ profile_inline_ops->initialize_flow_management_of_ndev_profile_inline(ndev))
+ goto err_exit0;
+
+ } else {
+ /* check if same flow type is requested, otherwise fail */
+ if (ndev->flow_profile != flow_profile) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: Different flow types requested on same NIC device. Not supported.");
+ goto err_exit0;
+ }
+ }
+
/* Allocate the requested queues in HW for this dev */
for (i = 0; i < alloc_rx_queues; i++) {
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a6293f5f82..c9e4008b7e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,20 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+/*
+ * Public functions
+ */
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return -1;
+}
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
+{
+ return 0;
+}
+
struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid,
@@ -51,6 +65,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
}
static const struct profile_inline_ops ops = {
+ /*
+ * Management
+ */
+ .done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
+ .initialize_flow_management_of_ndev_profile_inline =
+ initialize_flow_management_of_ndev_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index a83cc299b4..b87f8542ac 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -11,6 +11,14 @@
#include "flow_api.h"
#include "stream_binary_flow_api.h"
+/*
+ * Management
+ */
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index d133336fad..149c549112 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -226,6 +226,14 @@ const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
struct profile_inline_ops {
+ /*
+ * Management
+ */
+
+ int (*done_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
+ int (*initialize_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 07/86] net/ntnic: add NT flow profile management implementation
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (5 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 06/86] net/ntnic: add management API for NT flow profile Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 08/86] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
` (79 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Implements the functions required to (re)set the NT flow dev.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 4 ++
drivers/net/ntnic/include/flow_api_engine.h | 10 ++++
drivers/net/ntnic/meson.build | 4 ++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 55 +++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 52 ++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 19 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 59 +++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 23 ++++++++
.../profile_inline/flow_api_profile_inline.c | 52 ++++++++++++++++
9 files changed, 278 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 790b2f6b03..748da89262 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -61,6 +61,10 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *group_handle;
+ void *hw_db_handle;
+ void *id_table_handle;
+
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 52ff3cb865..2497c31a08 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -6,6 +6,8 @@
#ifndef _FLOW_API_ENGINE_H_
#define _FLOW_API_ENGINE_H_
+#include <stdint.h>
+
/*
* Resource management
*/
@@ -46,6 +48,9 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_CPY_WRITERS_SUPPORTED 8
+
+
struct flow_handle {
struct flow_eth_dev *dev;
struct flow_handle *next;
@@ -55,4 +60,9 @@ void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
+/*
+ * Group management
+ */
+int flow_group_handle_create(void **handle, uint32_t group_count);
+int flow_group_handle_destroy(void **handle);
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f5605e81cb..f7292144ac 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -18,6 +18,7 @@ includes = [
include_directories('nthw/supported'),
include_directories('nthw/model'),
include_directories('nthw/flow_filter'),
+ include_directories('nthw/flow_api'),
include_directories('nim/'),
]
@@ -47,7 +48,10 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/flow_group.c',
+ 'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
+ 'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
new file mode 100644
index 0000000000..a7371f3aad
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -0,0 +1,55 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "flow_api_engine.h"
+
+#define OWNER_ID_COUNT 256
+#define PORT_COUNT 8
+
+struct group_lookup_entry_s {
+ uint64_t ref_counter;
+ uint32_t *reverse_lookup;
+};
+
+struct group_handle_s {
+ uint32_t group_count;
+
+ uint32_t *translation_table;
+
+ struct group_lookup_entry_s *lookup_entries;
+};
+
+int flow_group_handle_create(void **handle, uint32_t group_count)
+{
+	struct group_handle_s *group_handle;
+
+	*handle = calloc(1, sizeof(struct group_handle_s));
+	group_handle = *handle;
+	if (group_handle == NULL)
+		return -1;
+	group_handle->group_count = group_count;
+	group_handle->translation_table =
+		calloc((uint32_t)(group_count * PORT_COUNT * OWNER_ID_COUNT), sizeof(uint32_t));
+	group_handle->lookup_entries = calloc(group_count, sizeof(struct group_lookup_entry_s));
+	return 0;
+}
+
+int flow_group_handle_destroy(void **handle)
+{
+ if (*handle) {
+ struct group_handle_s *group_handle = (struct group_handle_s *)*handle;
+
+ free(group_handle->translation_table);
+ free(group_handle->lookup_entries);
+
+ free(*handle);
+ *handle = NULL;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
new file mode 100644
index 0000000000..9b46848e59
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <pthread.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "flow_id_table.h"
+
+#define NTNIC_ARRAY_BITS 14
+#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+
+struct ntnic_id_table_element {
+ union flm_handles handle;
+ uint8_t caller_id;
+ uint8_t type;
+};
+
+struct ntnic_id_table_data {
+ struct ntnic_id_table_element *arrays[NTNIC_ARRAY_SIZE];
+ pthread_mutex_t mtx;
+
+ uint32_t next_id;
+
+ uint32_t free_head;
+ uint32_t free_tail;
+ uint32_t free_count;
+};
+
+void *ntnic_id_table_create(void)
+{
+	struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
+	if (handle == NULL)
+		return NULL;
+	pthread_mutex_init(&handle->mtx, NULL);
+	handle->next_id = 1;
+	return handle;
+}
+
+void ntnic_id_table_destroy(void *id_table)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ for (uint32_t i = 0; i < NTNIC_ARRAY_SIZE; ++i)
+ free(handle->arrays[i]);
+
+ pthread_mutex_destroy(&handle->mtx);
+
+ free(id_table);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
new file mode 100644
index 0000000000..13455f1165
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLOW_ID_TABLE_H_
+#define _FLOW_ID_TABLE_H_
+
+#include <stdint.h>
+
+union flm_handles {
+ uint64_t idx;
+ void *p;
+};
+
+void *ntnic_id_table_create(void);
+void ntnic_id_table_destroy(void *id_table);
+
+#endif /* _FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
new file mode 100644
index 0000000000..5fda11183c
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+
+#include "flow_api_hw_db_inline.h"
+
+/******************************************************************************/
+/* Handle */
+/******************************************************************************/
+
+struct hw_db_inline_resource_db {
+ /* Actions */
+ struct hw_db_inline_resource_db_cot {
+ struct hw_db_inline_cot_data data;
+ int ref;
+ } *cot;
+
+ uint32_t nb_cot;
+
+ /* Hardware */
+
+ struct hw_db_inline_resource_db_cfn {
+ uint64_t priority;
+ int cfn_hw;
+ int ref;
+ } *cfn;
+};
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
+{
+	/* Note: calloc is required so hw_db_inline_destroy() can safely free a partially initialized db */
+ struct hw_db_inline_resource_db *db = calloc(1, sizeof(struct hw_db_inline_resource_db));
+
+ if (db == NULL)
+ return -1;
+
+ db->nb_cot = ndev->be.cat.nb_cat_funcs;
+ db->cot = calloc(db->nb_cot, sizeof(struct hw_db_inline_resource_db_cot));
+
+ if (db->cot == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ *db_handle = db;
+ return 0;
+}
+
+void hw_db_inline_destroy(void *db_handle)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ free(db->cot);
+
+ free(db->cfn);
+
+ free(db);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
new file mode 100644
index 0000000000..23caf73cf3
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_HW_DB_INLINE_H_
+#define _FLOW_API_HW_DB_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+
+struct hw_db_inline_cot_data {
+ uint32_t matcher_color_contrib : 4;
+ uint32_t frag_rcp : 4;
+ uint32_t padding : 24;
+};
+
+/**/
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
+void hw_db_inline_destroy(void *db_handle);
+
+#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index c9e4008b7e..986196b408 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,6 +4,9 @@
*/
#include "ntlog.h"
+#include "flow_api_engine.h"
+#include "flow_api_hw_db_inline.h"
+#include "flow_id_table.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
@@ -14,11 +17,60 @@
int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+ if (!ndev->flow_mgnt_prepared) {
+ /* Check static arrays are big enough */
+ assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+
+ ndev->id_table_handle = ntnic_id_table_create();
+
+ if (ndev->id_table_handle == NULL)
+ goto err_exit0;
+
+ if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
+ goto err_exit0;
+
+ if (hw_db_inline_create(ndev, &ndev->hw_db_handle))
+ goto err_exit0;
+
+ ndev->flow_mgnt_prepared = 1;
+ }
+
+ return 0;
+
+err_exit0:
+ done_flow_management_of_ndev_profile_inline(ndev);
return -1;
}
int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ if (ndev->flow_mgnt_prepared) {
+ flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+
+ flow_group_handle_destroy(&ndev->group_handle);
+ ntnic_id_table_destroy(ndev->id_table_handle);
+
+ flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+
+ hw_mod_tpe_reset(&ndev->be);
+ flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
+ flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+
+ hw_db_inline_destroy(ndev->hw_db_handle);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ ndev->flow_mgnt_prepared = 0;
+ }
+
return 0;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 08/86] net/ntnic: add create/destroy implementation for NT flows
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (6 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 07/86] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 09/86] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
` (78 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Implements flow create/destroy functions with minimal capabilities:
* item: any
* action: port_id
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 6 +
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 105 +++
.../ntnic/include/stream_binary_flow_api.h | 4 +
drivers/net/ntnic/meson.build | 2 +
drivers/net/ntnic/nthw/flow_api/flow_group.c | 44 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 79 +++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 4 +
.../flow_api/profile_inline/flm_lrn_queue.c | 28 +
.../flow_api/profile_inline/flm_lrn_queue.h | 14 +
.../profile_inline/flow_api_hw_db_inline.c | 93 +++
.../profile_inline/flow_api_hw_db_inline.h | 64 ++
.../profile_inline/flow_api_profile_inline.c | 657 ++++++++++++++++++
13 files changed, 1103 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b9b87bdfe..1c653fd5a0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,3 +12,9 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
Linux = Y
x86-64 = Y
+
+[rte_flow items]
+any = Y
+
+[rte_flow actions]
+port_id = Y
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 748da89262..667dad6d5f 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -68,6 +68,9 @@ struct flow_nic_dev {
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
+ /* linked list of all FLM flows created on this NIC */
+ struct flow_handle *flow_base_flm;
+ pthread_mutex_t flow_mtx;
/* NIC backend API */
struct flow_api_backend_s be;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 2497c31a08..b8da5eafba 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -7,6 +7,10 @@
#define _FLOW_API_ENGINE_H_
#include <stdint.h>
+#include <stdatomic.h>
+
+#include "hw_mod_backend.h"
+#include "stream_binary_flow_api.h"
/*
* Resource management
@@ -50,10 +54,107 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+enum flow_port_type_e {
+ PORT_NONE, /* not defined or drop */
+ PORT_INTERNAL, /* no queues attached */
+ PORT_PHY, /* MAC phy output queue */
+ PORT_VIRT, /* Memory queues to Host */
+};
+
+struct output_s {
+ uint32_t owning_port_id;/* the port who owns this output destination */
+ enum flow_port_type_e type;
+ int id; /* depending on port type: queue ID or physical port id or not used */
+ int active; /* activated */
+};
+
+struct nic_flow_def {
+ /*
+ * Frame Decoder match info collected
+ */
+ int l2_prot;
+ int l3_prot;
+ int l4_prot;
+ int tunnel_prot;
+ int tunnel_l3_prot;
+ int tunnel_l4_prot;
+ int vlans;
+ int fragmentation;
+ int ip_prot;
+ int tunnel_ip_prot;
+ /*
+ * Additional meta data for various functions
+ */
+ int in_port_override;
+ int non_empty; /* default value is -1; value 1 means flow actions update */
+ struct output_s dst_id[MAX_OUTPUT_DEST];/* define the output to use */
+ /* total number of available queues defined for all outputs - i.e. number of dst_id's */
+ int dst_num_avail;
+
+ /*
+ * Mark or Action info collection
+ */
+ uint32_t mark;
+
+ uint32_t jump_to_group;
+
+ int full_offload;
+};
+
+enum flow_handle_type {
+ FLOW_HANDLE_TYPE_FLOW,
+ FLOW_HANDLE_TYPE_FLM,
+};
struct flow_handle {
+ enum flow_handle_type type;
+ uint32_t flm_id;
+ uint16_t caller_id;
+ uint16_t learn_ignored;
+
struct flow_eth_dev *dev;
struct flow_handle *next;
+ struct flow_handle *prev;
+
+ void *user_data;
+
+ union {
+ struct {
+ /*
+ * 1st step conversion and validation of flow
+ * verified and converted flow match + actions structure
+ */
+ struct nic_flow_def *fd;
+ /*
+ * 2nd step NIC HW resource allocation and configuration
+ * NIC resource management structures
+ */
+ struct {
+ uint32_t db_idx_counter;
+ uint32_t db_idxs[RES_COUNT];
+ };
+ uint32_t port_id; /* MAC port ID or override of virtual in_port */
+ };
+
+ struct {
+ uint32_t flm_db_idx_counter;
+ uint32_t flm_db_idxs[RES_COUNT];
+
+ uint32_t flm_data[10];
+ uint8_t flm_prot;
+ uint8_t flm_kid;
+ uint8_t flm_prio;
+ uint8_t flm_ft;
+
+ uint16_t flm_rpl_ext_ptr;
+ uint32_t flm_nat_ipv4;
+ uint16_t flm_nat_port;
+ uint8_t flm_dscp;
+ uint32_t flm_teid;
+ uint8_t flm_rqi;
+ uint8_t flm_qfi;
+ };
+ };
};
void km_free_ndev_resource_management(void **handle);
@@ -65,4 +166,8 @@ void kcc_free_ndev_resource_management(void **handle);
*/
int flow_group_handle_create(void **handle, uint32_t group_count);
int flow_group_handle_destroy(void **handle);
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out);
+
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index a6244d4082..d878b848c2 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -8,6 +8,10 @@
#include "rte_flow.h"
#include "rte_flow_driver.h"
+
+/* Max RSS hash key length in bytes */
+#define MAX_RSS_KEY_LEN 40
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f7292144ac..e1fef37ccb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -50,6 +50,8 @@ sources = files(
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
+ 'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
index a7371f3aad..f76986b178 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_group.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -53,3 +53,47 @@ int flow_group_handle_destroy(void **handle)
return 0;
}
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out)
+{
+ struct group_handle_s *group_handle = (struct group_handle_s *)handle;
+ uint32_t *table_ptr;
+ uint32_t lookup;
+
+ if (group_handle == NULL || group_in >= group_handle->group_count || port_id >= PORT_COUNT)
+ return -1;
+
+ /* Don't translate group 0 */
+ if (group_in == 0) {
+ *group_out = 0;
+ return 0;
+ }
+
+ table_ptr = &group_handle->translation_table[port_id * OWNER_ID_COUNT * PORT_COUNT +
+ owner_id * OWNER_ID_COUNT + group_in];
+ lookup = *table_ptr;
+
+ if (lookup == 0) {
+ for (lookup = 1; lookup < group_handle->group_count &&
+ group_handle->lookup_entries[lookup].ref_counter > 0;
+ ++lookup)
+ ;
+
+ if (lookup < group_handle->group_count) {
+ group_handle->lookup_entries[lookup].reverse_lookup = table_ptr;
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+
+ *table_ptr = lookup;
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+ }
+
+ *group_out = lookup;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 9b46848e59..5635ac4524 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -4,6 +4,7 @@
*/
#include <pthread.h>
+#include <stdint.h>
#include <stdlib.h>
#include <string.h>
@@ -11,6 +12,10 @@
#define NTNIC_ARRAY_BITS 14
#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+#define NTNIC_ARRAY_MASK (NTNIC_ARRAY_SIZE - 1)
+#define NTNIC_MAX_ID (NTNIC_ARRAY_SIZE * NTNIC_ARRAY_SIZE)
+#define NTNIC_MAX_ID_MASK (NTNIC_MAX_ID - 1)
+#define NTNIC_MIN_FREE 1000
struct ntnic_id_table_element {
union flm_handles handle;
@@ -29,6 +34,36 @@ struct ntnic_id_table_data {
uint32_t free_count;
};
+static inline struct ntnic_id_table_element *
+ntnic_id_table_array_find_element(struct ntnic_id_table_data *handle, uint32_t id)
+{
+ uint32_t idx_d1 = id & NTNIC_ARRAY_MASK;
+ uint32_t idx_d2 = (id >> NTNIC_ARRAY_BITS) & NTNIC_ARRAY_MASK;
+
+ if (handle->arrays[idx_d2] == NULL) {
+ handle->arrays[idx_d2] =
+ calloc(NTNIC_ARRAY_SIZE, sizeof(struct ntnic_id_table_element));
+ }
+
+ return &handle->arrays[idx_d2][idx_d1];
+}
+
+static inline uint32_t ntnic_id_table_array_pop_free_id(struct ntnic_id_table_data *handle)
+{
+ uint32_t id = 0;
+
+ if (handle->free_count > NTNIC_MIN_FREE) {
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_tail);
+ id = handle->free_tail;
+
+ handle->free_tail = element->handle.idx & NTNIC_MAX_ID_MASK;
+ handle->free_count -= 1;
+ }
+
+ return id;
+}
+
void *ntnic_id_table_create(void)
{
struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
@@ -50,3 +85,47 @@ void ntnic_id_table_destroy(void *id_table)
free(id_table);
}
+
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ uint32_t new_id = ntnic_id_table_array_pop_free_id(handle);
+
+ if (new_id == 0)
+ new_id = handle->next_id++;
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, new_id);
+ element->caller_id = caller_id;
+ element->type = type;
+ memcpy(&element->handle, &flm_h, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+
+ return new_id;
+}
+
+void ntnic_id_table_free_id(void *id_table, uint32_t id)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *current_element =
+ ntnic_id_table_array_find_element(handle, id);
+ memset(current_element, 0, sizeof(struct ntnic_id_table_element));
+
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_head);
+ element->handle.idx = id;
+ handle->free_head = id;
+ handle->free_count += 1;
+
+ if (handle->free_tail == 0)
+ handle->free_tail = handle->free_head;
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index 13455f1165..e190fe4a11 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -16,4 +16,8 @@ union flm_handles {
void *ntnic_id_table_create(void);
void ntnic_id_table_destroy(void *id_table);
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type);
+void ntnic_id_table_free_id(void *id_table, uint32_t id);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
new file mode 100644
index 0000000000..ad7efafe08
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+
+#include "hw_mod_flm_v25.h"
+
+#include "flm_lrn_queue.h"
+
+#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ unsigned int n = rte_ring_enqueue_zc_burst_elem_start(q, ELEM_SIZE, 1, &zcd, NULL);
+ return (n == 0) ? NULL : zcd.ptr1;
+}
+
+void flm_lrn_queue_release_write_buffer(void *q)
+{
+ rte_ring_enqueue_zc_elem_finish(q, 1);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
new file mode 100644
index 0000000000..8cee0c8e78
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_LRN_QUEUE_H_
+#define _FLM_LRN_QUEUE_H_
+
+#include <stdint.h>
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q);
+void flm_lrn_queue_release_write_buffer(void *q);
+
+#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5fda11183c..4ea9387c80 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -3,7 +3,11 @@
*/
+#include "hw_mod_backend.h"
+#include "flow_api_engine.h"
+
#include "flow_api_hw_db_inline.h"
+#include "rte_common.h"
/******************************************************************************/
/* Handle */
@@ -57,3 +61,92 @@ void hw_db_inline_destroy(void *db_handle)
free(db);
}
+
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size)
+{
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_COT:
+ hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/******************************************************************************/
+/* COT */
+/******************************************************************************/
+
+static int hw_db_inline_cot_compare(const struct hw_db_inline_cot_data *data1,
+ const struct hw_db_inline_cot_data *data2)
+{
+ return data1->matcher_color_contrib == data2->matcher_color_contrib &&
+ data1->frag_rcp == data2->frag_rcp;
+}
+
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cot_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_COT;
+
+ for (uint32_t i = 1; i < db->nb_cot; ++i) {
+ int ref = db->cot[i].ref;
+
+ if (ref > 0 && hw_db_inline_cot_compare(data, &db->cot[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cot_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cot[idx.ids].ref = 1;
+ memcpy(&db->cot[idx.ids].data, data, sizeof(struct hw_db_inline_cot_data));
+
+ return idx;
+}
+
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cot[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cot[idx.ids].ref -= 1;
+
+ if (db->cot[idx.ids].ref <= 0) {
+ memset(&db->cot[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cot_data));
+ db->cot[idx.ids].ref = 0;
+ }
+}
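The COT table above is a reference-counted deduplication table: adding data that matches a live entry bumps its refcount, otherwise the first free slot is claimed; index 0 is reserved so a zeroed index can mean "none". A simplified sketch of the pattern (hypothetical names, a plain int payload standing in for hw_db_inline_cot_data):

```c
#include <assert.h>
#include <string.h>

#define NB_ENTRIES 8

struct entry { int ref; int data; };
static struct entry tbl[NB_ENTRIES];

/* Return the index of a matching live entry (bumping its refcount) or claim
 * the first free slot; -1 when the table is exhausted. Slot 0 is reserved. */
static int tbl_add(int data)
{
	int free_idx = -1;

	for (int i = 1; i < NB_ENTRIES; i++) {
		if (tbl[i].ref > 0 && tbl[i].data == data) {
			tbl[i].ref++;
			return i;
		}

		if (free_idx < 0 && tbl[i].ref <= 0)
			free_idx = i;
	}

	if (free_idx < 0)
		return -1;

	tbl[free_idx].ref = 1;
	tbl[free_idx].data = data;
	return free_idx;
}

static void tbl_deref(int i)
{
	if (--tbl[i].ref <= 0)
		memset(&tbl[i], 0, sizeof(tbl[i]));   /* last user: wipe the slot */
}
```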
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 23caf73cf3..0116af015d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -9,15 +9,79 @@
#include "flow_api.h"
+#define HW_DB_INLINE_MAX_QST_PER_QSL 128
+#define HW_DB_INLINE_MAX_ENCAP_SIZE 128
+
+#define HW_DB_IDX \
+ union { \
+ struct { \
+ uint32_t id1 : 8; \
+ uint32_t id2 : 8; \
+ uint32_t id3 : 8; \
+ uint32_t type : 7; \
+ uint32_t error : 1; \
+ }; \
+ struct { \
+ uint32_t ids : 24; \
+ }; \
+ uint32_t raw; \
+ }
+
+/* Strongly typed int types */
+struct hw_db_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_cot_idx {
+ HW_DB_IDX;
+};
+
+enum hw_db_idx_type {
+ HW_DB_IDX_TYPE_NONE = 0,
+ HW_DB_IDX_TYPE_COT,
+};
+
+/* Functionality data types */
+struct hw_db_inline_qsl_data {
+ uint32_t discard : 1;
+ uint32_t drop : 1;
+ uint32_t table_size : 7;
+ uint32_t retransmit : 1;
+ uint32_t padding : 22;
+
+ struct {
+ uint16_t queue : 7;
+ uint16_t queue_en : 1;
+ uint16_t tx_port : 3;
+ uint16_t tx_port_en : 1;
+ uint16_t padding : 4;
+ } table[HW_DB_INLINE_MAX_QST_PER_QSL];
+};
+
struct hw_db_inline_cot_data {
uint32_t matcher_color_contrib : 4;
uint32_t frag_rcp : 4;
uint32_t padding : 24;
};
+struct hw_db_inline_hsh_data {
+ uint32_t func;
+ uint64_t hash_mask;
+ uint8_t key[MAX_RSS_KEY_LEN];
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
void hw_db_inline_destroy(void *db_handle);
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size);
+
+/**/
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data);
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
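The HW_DB_IDX macro overlays three 8-bit component IDs, a 7-bit type and an error bit on a single 32-bit word, with `ids` aliasing the low 24 bits. Bitfield layout is implementation-defined, but on the little-endian GCC/Clang targets this driver builds for, the overlay behaves as sketched below (standalone copy for illustration; anonymous struct members require C11):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone copy of the HW_DB_IDX overlay from the header above. */
union db_idx {
	struct {
		uint32_t id1 : 8;
		uint32_t id2 : 8;
		uint32_t id3 : 8;
		uint32_t type : 7;
		uint32_t error : 1;
	};
	struct {
		uint32_t ids : 24;   /* id1..id3 viewed as one 24-bit field */
	};
	uint32_t raw;
};
```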
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 986196b408..7f9869a511 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,12 +4,545 @@
*/
#include "ntlog.h"
+#include "nt_util.h"
+
+#include "hw_mod_backend.h"
+#include "flm_lrn_queue.h"
+#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
#include "flow_id_table.h"
+#include "stream_binary_flow_api.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#include <rte_common.h>
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
+static void *flm_lrn_queue_arr;
+
+struct flm_flow_key_def_s {
+ union {
+ struct {
+ uint64_t qw0_dyn : 7;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 7;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 7;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 7;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_proto : 1;
+ uint64_t inner_proto : 1;
+ uint64_t pad : 2;
+ };
+ uint64_t data;
+ };
+ uint32_t mask[10];
+};
+
+/*
+ * Flow Matcher functionality
+ */
+static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
+{
+ struct flow_eth_dev *dev = ndev->eth_base;
+
+ while (dev) {
+ if (dev->port_id == port_id)
+ return dev->port;
+
+ dev = dev->next;
+ }
+
+ return UINT8_MAX;
+}
+
+static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base)
+ ndev->flow_base->prev = fh;
+
+ fh->next = ndev->flow_base;
+ fh->prev = NULL;
+ ndev->flow_base = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ struct flow_handle *next = fh->next;
+ struct flow_handle *prev = fh->prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base == fh) {
+ ndev->flow_base = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base_flm)
+ ndev->flow_base_flm->prev = fh;
+
+ fh->next = ndev->flow_base_flm;
+ fh->prev = NULL;
+ ndev->flow_base_flm = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh_flm)
+{
+ struct flow_handle *next = fh_flm->next;
+ struct flow_handle *prev = fh_flm->prev;
+
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base_flm = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base_flm == fh_flm) {
+ ndev->flow_base_flm = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
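nic_insert_flow/nic_remove_flow and their _flm twins maintain plain doubly-linked lists with head insertion; the four-way branch in removal distinguishes a middle node, the head, the tail and the only element. The same logic without the device structure and mutex (hypothetical names):

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next, *prev; };
static struct node *base;   /* list head, like ndev->flow_base */

static void list_insert(struct node *n)
{
	if (base)
		base->prev = n;

	n->next = base;         /* push at the head */
	n->prev = NULL;
	base = n;
}

static void list_remove(struct node *n)
{
	if (n->next && n->prev) {       /* middle node */
		n->prev->next = n->next;
		n->next->prev = n->prev;
	} else if (n->next) {           /* head of a longer list */
		base = n->next;
		n->next->prev = NULL;
	} else if (n->prev) {           /* tail */
		n->prev->next = NULL;
	} else if (base == n) {         /* only element */
		base = NULL;
	}
}
```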
+
+static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
+{
+ if (fd) {
+ fd->full_offload = -1;
+ fd->in_port_override = -1;
+ fd->mark = UINT32_MAX;
+ fd->jump_to_group = UINT32_MAX;
+
+ fd->l2_prot = -1;
+ fd->l3_prot = -1;
+ fd->l4_prot = -1;
+ fd->vlans = 0;
+ fd->tunnel_prot = -1;
+ fd->tunnel_l3_prot = -1;
+ fd->tunnel_l4_prot = -1;
+ fd->fragmentation = -1;
+ fd->ip_prot = -1;
+ fd->tunnel_ip_prot = -1;
+
+ fd->non_empty = -1;
+ }
+
+ return fd;
+}
+
+static inline struct nic_flow_def *allocate_nic_flow_def(void)
+{
+ return prepare_nic_flow_def(calloc(1, sizeof(struct nic_flow_def)));
+}
+
+static bool fd_has_empty_pattern(const struct nic_flow_def *fd)
+{
+ return fd && fd->vlans == 0 && fd->l2_prot < 0 && fd->l3_prot < 0 && fd->l4_prot < 0 &&
+ fd->tunnel_prot < 0 && fd->tunnel_l3_prot < 0 && fd->tunnel_l4_prot < 0 &&
+ fd->ip_prot < 0 && fd->tunnel_ip_prot < 0 && fd->non_empty < 0;
+}
+
+static inline const void *memcpy_mask_if(void *dest, const void *src, const void *mask,
+ size_t count)
+{
+ if (mask == NULL)
+ return src;
+
+ unsigned char *dest_ptr = (unsigned char *)dest;
+ const unsigned char *src_ptr = (const unsigned char *)src;
+ const unsigned char *mask_ptr = (const unsigned char *)mask;
+
+ for (size_t i = 0; i < count; ++i)
+ dest_ptr[i] = src_ptr[i] & mask_ptr[i];
+
+ return dest;
+}
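memcpy_mask_if has a deliberate fast path: with a NULL mask it copies nothing and returns src unchanged, so callers can always dereference the returned pointer whether or not a per-field mask was supplied. A standalone copy for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Standalone copy of the masked-copy helper from the patch above. */
static const void *memcpy_mask_if(void *dest, const void *src, const void *mask,
				  size_t count)
{
	if (mask == NULL)
		return src;   /* no mask: hand back the source untouched */

	unsigned char *d = dest;
	const unsigned char *s = src;
	const unsigned char *m = mask;

	for (size_t i = 0; i < count; ++i)
		d[i] = s[i] & m[i];   /* keep only the bits the mask allows */

	return dest;
}
```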
+
+static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLM)
+ return -1;
+
+ if (flm_op == NT_FLM_OP_LEARN) {
+ union flm_handles flm_h;
+ flm_h.p = fh;
+ fh->flm_id = ntnic_id_table_get_id(fh->dev->ndev->id_table_handle, flm_h,
+ fh->caller_id, 1);
+ }
+
+ uint32_t flm_id = fh->flm_id;
+
+ if (flm_op == NT_FLM_OP_UNLEARN) {
+ ntnic_id_table_free_id(fh->dev->ndev->id_table_handle, flm_id);
+
+ if (fh->learn_ignored == 1)
+ return 0;
+ }
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->id = flm_id;
+
+ learn_record->qw0[0] = fh->flm_data[9];
+ learn_record->qw0[1] = fh->flm_data[8];
+ learn_record->qw0[2] = fh->flm_data[7];
+ learn_record->qw0[3] = fh->flm_data[6];
+ learn_record->qw4[0] = fh->flm_data[5];
+ learn_record->qw4[1] = fh->flm_data[4];
+ learn_record->qw4[2] = fh->flm_data[3];
+ learn_record->qw4[3] = fh->flm_data[2];
+ learn_record->sw8 = fh->flm_data[1];
+ learn_record->sw9 = fh->flm_data[0];
+ learn_record->prot = fh->flm_prot;
+
+ /* Last non-zero mtr is used for statistics */
+ uint8_t mbrs = 0;
+
+ learn_record->vol_idx = mbrs;
+
+ learn_record->nat_ip = fh->flm_nat_ipv4;
+ learn_record->nat_port = fh->flm_nat_port;
+ learn_record->nat_en = fh->flm_nat_ipv4 || fh->flm_nat_port ? 1 : 0;
+
+ learn_record->dscp = fh->flm_dscp;
+ learn_record->teid = fh->flm_teid;
+ learn_record->qfi = fh->flm_qfi;
+ learn_record->rqi = fh->flm_rqi;
+ /* Lower 10 bits used for RPL EXT PTR */
+ learn_record->color = fh->flm_rpl_ext_ptr & 0x3ff;
+
+ learn_record->ent = 0;
+ learn_record->op = flm_op & 0xf;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->prio = fh->flm_prio & 0x3;
+ learn_record->ft = fh->flm_ft;
+ learn_record->kid = fh->flm_kid;
+ learn_record->eor = 1;
+ learn_record->scrub_prof = 0;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+ return 0;
+}
+
+/*
+ * This function must be callable without locking any mutexes
+ */
+static int interpret_flow_actions(const struct flow_eth_dev *dev,
+ const struct rte_flow_action action[],
+ const struct rte_flow_action *action_mask,
+ struct nic_flow_def *fd,
+ struct rte_flow_error *error,
+ uint32_t *num_dest_port,
+ uint32_t *num_queues)
+{
+ unsigned int encap_decap_order = 0;
+
+ *num_dest_port = 0;
+ *num_queues = 0;
+
+ if (action == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow actions missing");
+ return -1;
+ }
+
+ /*
+	 * Gather flow match + actions and convert into the internal flow definition
+	 * structure (struct nic_flow_def_s). This is the first step in flow creation:
+	 * validate, convert and prepare.
+ */
+ for (int aidx = 0; action[aidx].type != RTE_FLOW_ACTION_TYPE_END; ++aidx) {
+ switch (action[aidx].type) {
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_PORT_ID", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_port_id port_id_tmp;
+ const struct rte_flow_action_port_id *port_id =
+ memcpy_mask_if(&port_id_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_port_id));
+
+ if (*num_dest_port > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple port_id actions for one flow is not supported");
+ flow_nic_set_error(ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED,
+ error);
+ return -1;
+ }
+
+ uint8_t port = get_port_from_port_id(dev->ndev, port_id->id);
+
+ if (fd->dst_num_avail == MAX_OUTPUT_DEST) {
+ NT_LOG(ERR, FILTER, "Too many output destinations");
+ flow_nic_set_error(ERR_OUTPUT_TOO_MANY, error);
+ return -1;
+ }
+
+ if (port >= dev->ndev->be.num_phy_ports) {
+ NT_LOG(ERR, FILTER, "Phy port out of range");
+ flow_nic_set_error(ERR_OUTPUT_INVALID, error);
+ return -1;
+ }
+
+ /* New destination port to add */
+ fd->dst_id[fd->dst_num_avail].owning_port_id = port_id->id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_PHY;
+ fd->dst_id[fd->dst_num_avail].id = (int)port;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ if (fd->full_offload < 0)
+ fd->full_offload = 1;
+
+ *num_dest_port += 1;
+
+ NT_LOG(DBG, FILTER, "Phy port ID: %i", (int)port);
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
+ action[aidx].type);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+ }
+
+ if (!(encap_decap_order == 0 || encap_decap_order == 2)) {
+ NT_LOG(ERR, FILTER, "Invalid encap/decap actions");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int interpret_flow_elements(const struct flow_eth_dev *dev,
+ const struct rte_flow_item elem[],
+ struct nic_flow_def *fd __rte_unused,
+ struct rte_flow_error *error,
+ uint16_t implicit_vlan_vid __rte_unused,
+ uint32_t *in_port_id,
+ uint32_t *packet_data,
+ uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
+{
+ *in_port_id = UINT32_MAX;
+
+ memset(packet_data, 0x0, sizeof(uint32_t) * 10);
+ memset(packet_mask, 0x0, sizeof(uint32_t) * 10);
+ memset(key_def, 0x0, sizeof(struct flm_flow_key_def_s));
+
+ if (elem == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow items missing");
+ return -1;
+ }
+
+ int qw_reserved_mac = 0;
+ int qw_reserved_ipv6 = 0;
+
+ int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
+
+ if (qw_free < 0) {
+ NT_LOG(ERR, FILTER, "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ANY:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
+ (int)elem[eidx].type);
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
+ uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
+ uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
+ uint32_t priority __rte_unused)
+{
+ struct nic_flow_def *fd;
+ struct flow_handle fh_copy;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLOW)
+ return -1;
+
+ memcpy(&fh_copy, fh, sizeof(struct flow_handle));
+ memset(fh, 0x0, sizeof(struct flow_handle));
+ fd = fh_copy.fd;
+
+ fh->type = FLOW_HANDLE_TYPE_FLM;
+ fh->caller_id = fh_copy.caller_id;
+ fh->dev = fh_copy.dev;
+ fh->next = fh_copy.next;
+ fh->prev = fh_copy.prev;
+ fh->user_data = fh_copy.user_data;
+
+ fh->flm_db_idx_counter = fh_copy.db_idx_counter;
+
+ for (int i = 0; i < RES_COUNT; ++i)
+ fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+
+ free(fd);
+
+ return 0;
+}
+
+static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
+ const struct nic_flow_def *fd __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ uint32_t group __rte_unused,
+ uint32_t local_idxs[] __rte_unused,
+ uint32_t *local_idx_counter __rte_unused,
+ uint16_t *flm_rpl_ext_ptr __rte_unused,
+ uint32_t *flm_ft __rte_unused,
+ uint32_t *flm_scrub __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ return 0;
+}
+
+static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct nic_flow_def *fd,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
+ struct rte_flow_error *error, uint32_t port_id,
+ uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ struct flm_flow_key_def_s *key_def __rte_unused)
+{
+ struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
+
+ fh->type = FLOW_HANDLE_TYPE_FLOW;
+ fh->port_id = port_id;
+ fh->dev = dev;
+ fh->fd = fd;
+ fh->caller_id = caller_id;
+
+ struct hw_db_inline_qsl_data qsl_data;
+
+ struct hw_db_inline_hsh_data hsh_data;
+
+ if (attr->group > 0 && fd_has_empty_pattern(fd)) {
+ /*
+ * Default flow for group 1..32
+ */
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, NULL, NULL, NULL, error)) {
+ goto error_out;
+ }
+
+ nic_insert_flow(dev->ndev, fh);
+
+ } else if (attr->group > 0) {
+ /*
+ * Flow for group 1..32
+ */
+
+ /* Setup Actions */
+ uint16_t flm_rpl_ext_ptr = 0;
+ uint32_t flm_ft = 0;
+ uint32_t flm_scrub = 0;
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, &flm_rpl_ext_ptr, &flm_ft,
+ &flm_scrub, error)) {
+ goto error_out;
+ }
+
+ /* Program flow */
+ convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ flm_scrub, attr->priority & 0x3);
+ flm_flow_programming(fh, NT_FLM_OP_LEARN);
+
+ nic_insert_flow_flm(dev->ndev, fh);
+
+ } else {
+ /*
+ * Flow for group 0
+ */
+ nic_insert_flow(dev->ndev, fh);
+ }
+
+ return fh;
+
+error_out:
+
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ } else {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ }
+
+ free(fh);
+
+ return NULL;
+}
/*
* Public functions
@@ -82,6 +615,92 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
const struct rte_flow_action action[],
struct rte_flow_error *error)
{
+ struct flow_handle *fh = NULL;
+ int res;
+
+ uint32_t port_id = UINT32_MAX;
+ uint32_t num_dest_port;
+ uint32_t num_queues;
+
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ struct rte_flow_attr attr_local;
+ memcpy(&attr_local, attr, sizeof(struct rte_flow_attr));
+ uint16_t forced_vlan_vid_local = forced_vlan_vid;
+ uint16_t caller_id_local = caller_id;
+
+ if (attr_local.group > 0)
+ forced_vlan_vid_local = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ if (fd == NULL)
+ goto err_exit;
+
+ res = interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
+
+ if (res)
+ goto err_exit;
+
+ res = interpret_flow_elements(dev, elem, fd, error, forced_vlan_vid_local, &port_id,
+ packet_data, packet_mask, &key_def);
+
+ if (res)
+ goto err_exit;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ /* Translate group IDs */
+ if (fd->jump_to_group != UINT32_MAX &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ fd->jump_to_group, &fd->jump_to_group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (attr_local.group > 0 &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ attr_local.group, &attr_local.group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (port_id == UINT32_MAX)
+ port_id = dev->port_id;
+
+ /* Create and flush filter to NIC */
+ fh = create_flow_filter(dev, fd, &attr_local, forced_vlan_vid_local,
+ caller_id_local, error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ if (!fh)
+ goto err_exit;
+
+	NT_LOG(DBG, FILTER, "New FLOW: fh (flow handle) %p, fd (flow definition) %p", fh, fd);
+ NT_LOG(DBG, FILTER, ">>>>> [Dev %p] Nic %i, Port %i: fh %p fd %p - implementation <<<<<",
+ dev, dev->ndev->adapter_no, dev->port, fh, fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return fh;
+
+err_exit:
+
+ if (fh)
+ flow_destroy_locked_profile_inline(dev, fh, NULL);
+
+ else
+ free(fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ NT_LOG(ERR, FILTER, "ERR: %s", __func__);
return NULL;
}
@@ -96,6 +715,44 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
flow_nic_set_error(ERR_SUCCESS, error);
+ /* take flow out of ndev list - may not have been put there yet */
+ if (fh->type == FLOW_HANDLE_TYPE_FLM)
+ nic_remove_flow_flm(dev->ndev, fh);
+
+ else
+ nic_remove_flow(dev->ndev, fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ flm_flow_programming(fh, NT_FLM_OP_UNLEARN);
+
+ } else {
+ NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ free(fh->fd);
+ }
+
+ if (err) {
+ NT_LOG(ERR, FILTER, "FAILED removing flow: %p", fh);
+ flow_nic_set_error(ERR_REMOVE_FLOW_FAILED, error);
+ }
+
+ free(fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
return err;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 09/86] net/ntnic: add infrastructure for flow actions and items
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (7 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 08/86] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 10/86] net/ntnic: add action queue Serhii Iliushyk
` (77 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add entities (utilities, structures, etc.) required for the flow API
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Replace cast to void with __rte_unused
---
drivers/net/ntnic/include/flow_api.h | 34 ++++++++
drivers/net/ntnic/include/flow_api_engine.h | 46 +++++++++++
drivers/net/ntnic/include/hw_mod_backend.h | 33 ++++++++
drivers/net/ntnic/nthw/flow_api/flow_km.c | 81 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 68 +++++++++++++++-
5 files changed, 258 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 667dad6d5f..7f031ccda8 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -85,13 +85,47 @@ struct flow_nic_dev {
enum flow_nic_err_msg_e {
ERR_SUCCESS = 0,
ERR_FAILED = 1,
+ ERR_MEMORY = 2,
ERR_OUTPUT_TOO_MANY = 3,
+ ERR_RSS_TOO_MANY_QUEUES = 4,
+ ERR_VLAN_TYPE_NOT_SUPPORTED = 5,
+ ERR_VXLAN_HEADER_NOT_ACCEPTED = 6,
+ ERR_VXLAN_POP_INVALID_RECIRC_PORT = 7,
+ ERR_VXLAN_POP_FAILED_CREATING_VTEP = 8,
+ ERR_MATCH_VLAN_TOO_MANY = 9,
+ ERR_MATCH_INVALID_IPV6_HDR = 10,
+ ERR_MATCH_TOO_MANY_TUNNEL_PORTS = 11,
ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_FAILED_BY_HW_LIMITS = 13,
ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_MATCH_FAILED_TOO_COMPLEX = 15,
+ ERR_ACTION_REPLICATION_FAILED = 16,
+ ERR_ACTION_OUTPUT_RESOURCE_EXHAUSTION = 17,
+ ERR_ACTION_TUNNEL_HEADER_PUSH_OUTPUT_LIMIT = 18,
+ ERR_ACTION_INLINE_MOD_RESOURCE_EXHAUSTION = 19,
+ ERR_ACTION_RETRANSMIT_RESOURCE_EXHAUSTION = 20,
+ ERR_ACTION_FLOW_COUNTER_EXHAUSTION = 21,
+ ERR_ACTION_INTERNAL_RESOURCE_EXHAUSTION = 22,
+ ERR_INTERNAL_QSL_COMPARE_FAILED = 23,
+ ERR_INTERNAL_CAT_FUNC_REUSE_FAILED = 24,
+ ERR_MATCH_ENTROPHY_FAILED = 25,
+ ERR_MATCH_CAM_EXHAUSTED = 26,
+ ERR_INTERNAL_VIRTUAL_PORT_CREATION_FAILED = 27,
ERR_ACTION_UNSUPPORTED = 28,
ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_ACTION_NO_OUTPUT_DEFINED_USE_DEFAULT = 30,
+ ERR_ACTION_NO_OUTPUT_QUEUE_FOUND = 31,
+ ERR_MATCH_UNSUPPORTED_ETHER_TYPE = 32,
ERR_OUTPUT_INVALID = 33,
+ ERR_MATCH_PARTIAL_OFFLOAD_NOT_SUPPORTED = 34,
+ ERR_MATCH_CAT_CAM_EXHAUSTED = 35,
+ ERR_MATCH_KCC_KEY_CLASH = 36,
+ ERR_MATCH_CAT_CAM_FAILED = 37,
+ ERR_PARTIAL_FLOW_MARK_TOO_BIG = 38,
+ ERR_FLOW_PRIORITY_VALUE_INVALID = 39,
ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_RSS_TOO_LONG_KEY = 41,
+ ERR_ACTION_AGE_UNSUPPORTED_GROUP_0 = 42,
ERR_MSG_NO_MSG
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b8da5eafba..13fad2760a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -54,6 +54,30 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+#define MAX_MATCH_FIELDS 16
+
+struct match_elem_s {
+ int masked_for_tcam; /* if potentially selected for TCAM */
+ uint32_t e_word[4];
+ uint32_t e_mask[4];
+
+ int extr_start_offs_id;
+ int8_t rel_offs;
+ uint32_t word_len;
+};
+
+struct km_flow_def_s {
+ struct flow_api_backend_s *be;
+
+ /* For collect flow elements and sorting */
+ struct match_elem_s match[MAX_MATCH_FIELDS];
+ int num_ftype_elem;
+
+ /* Flow information */
+ /* HW input port ID needed for compare. In port must be identical on flow types */
+ uint32_t port_id;
+};
+
enum flow_port_type_e {
PORT_NONE, /* not defined or drop */
PORT_INTERNAL, /* no queues attached */
@@ -99,6 +123,25 @@ struct nic_flow_def {
uint32_t jump_to_group;
int full_offload;
+
+ /*
+ * Modify field
+ */
+ struct {
+ uint32_t select;
+ union {
+ uint8_t value8[16];
+ uint16_t value16[8];
+ uint32_t value32[4];
+ };
+ } modify_field[MAX_CPY_WRITERS_SUPPORTED];
+
+ uint32_t modify_field_count;
+
+ /*
+ * Key Matcher flow definitions
+ */
+ struct km_flow_def_s km;
};
enum flow_handle_type {
@@ -159,6 +202,9 @@ struct flow_handle {
void km_free_ndev_resource_management(void **handle);
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start, int8_t offset);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 34154c65f8..99b207a01c 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -133,6 +133,39 @@ enum km_flm_if_select_e {
unsigned int alloced_size; \
int debug
+enum {
+ PROT_OTHER = 0,
+ PROT_L2_ETH2 = 1,
+};
+
+enum {
+ PROT_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_L4_ICMP = 4
+};
+
+enum {
+ PROT_TUN_L3_OTHER = 0,
+ PROT_TUN_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_ICMP = 4
+};
+
+
+enum {
+ CPY_SELECT_DSCP_IPV4 = 0,
+ CPY_SELECT_DSCP_IPV6 = 1,
+ CPY_SELECT_RQI_QFI = 2,
+ CPY_SELECT_IPV4 = 3,
+ CPY_SELECT_PORT = 4,
+ CPY_SELECT_TEID = 5,
+};
+
struct common_func_s {
COMMON_FUNC_INFO_S;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index e04cd5e857..237e9f7b4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -3,10 +3,38 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <assert.h>
#include <stdlib.h>
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
+#include "nt_util.h"
+
+#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+
+static const struct cam_match_masks_s {
+ uint32_t word_len;
+ uint32_t key_mask[4];
+} cam_masks[] = {
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff } }, /* IP6_SRC, IP6_DST */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffff0000 } }, /* DMAC,SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffff0000, 0x00000000, 0xffff0000 } }, /* DMAC,ethtype */
+ { 4, { 0x00000000, 0x0000ffff, 0xffffffff, 0xffff0000 } }, /* SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0x00000000 } }, /* ETH_128 */
+ { 2, { 0xffffffff, 0xffffffff, 0x00000000, 0x00000000 } }, /* IP4_COMBINED */
+ /*
+ * ETH_TYPE, IP4_TTL_PROTO, IP4_SRC, IP4_DST, IP6_FLOW_TC,
+ * IP6_NEXT_HDR_HOP, TP_PORT_COMBINED, SIDEBAND_VNI
+ */
+ { 1, { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IP4_IHL_TOS, TP_PORT_SRC32_OR_ICMP, TCP_CTRL */
+ { 1, { 0xffff0000, 0x00000000, 0x00000000, 0x00000000 } },
+ { 1, { 0x0000ffff, 0x00000000, 0x00000000, 0x00000000 } }, /* TP_PORT_DST32 */
+ /* IPv4 TOS mask bits used often by OVS */
+ { 1, { 0x00030000, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IPv6 TOS mask bits used often by OVS */
+ { 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
+};
void km_free_ndev_resource_management(void **handle)
{
@@ -17,3 +45,56 @@ void km_free_ndev_resource_management(void **handle)
*handle = NULL;
}
+
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start_id, int8_t offset)
+{
+ /* valid word_len 1,2,4 */
+ if (word_len == 3) {
+ word_len = 4;
+ e_word[3] = 0;
+ e_mask[3] = 0;
+ }
+
+ if (word_len < 1 || word_len > 4) {
+ assert(0);
+ return -1;
+ }
+
+ for (unsigned int i = 0; i < word_len; i++) {
+ km->match[km->num_ftype_elem].e_word[i] = e_word[i];
+ km->match[km->num_ftype_elem].e_mask[i] = e_mask[i];
+ }
+
+ km->match[km->num_ftype_elem].word_len = word_len;
+ km->match[km->num_ftype_elem].rel_offs = offset;
+ km->match[km->num_ftype_elem].extr_start_offs_id = start_id;
+
+ /*
+ * Determine here if this flow may better be put into TCAM
+ * Otherwise it will go into CAM
+ * This is dependent on a cam_masks list defined above
+ */
+ km->match[km->num_ftype_elem].masked_for_tcam = 1;
+
+ for (unsigned int msk = 0; msk < NUM_CAM_MASKS; msk++) {
+ if (word_len == cam_masks[msk].word_len) {
+ int match = 1;
+
+ for (unsigned int wd = 0; wd < word_len; wd++) {
+ if (e_mask[wd] != cam_masks[msk].key_mask[wd]) {
+ match = 0;
+ break;
+ }
+ }
+
+ if (match) {
+ /* Can go into CAM */
+ km->match[km->num_ftype_elem].masked_for_tcam = 0;
+ }
+ }
+ }
+
+ km->num_ftype_elem++;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7f9869a511..0f136ee164 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -416,10 +416,67 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return 0;
}
-static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
- uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
- uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
- uint32_t priority __rte_unused)
+static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def *fd,
+ const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
+ uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
+{
+ switch (fd->l4_prot) {
+ case PROT_L4_ICMP:
+ fh->flm_prot = fd->ip_prot;
+ break;
+
+ default:
+ switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_ICMP:
+ fh->flm_prot = fd->tunnel_ip_prot;
+ break;
+
+ default:
+ fh->flm_prot = 0;
+ break;
+ }
+
+ break;
+ }
+
+ memcpy(fh->flm_data, packet_data, sizeof(uint32_t) * 10);
+
+ fh->flm_kid = flm_key_id;
+ fh->flm_rpl_ext_ptr = rpl_ext_ptr;
+ fh->flm_prio = (uint8_t)priority;
+ fh->flm_ft = (uint8_t)flm_ft;
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ switch (fd->modify_field[i].select) {
+ case CPY_SELECT_DSCP_IPV4:
+ case CPY_SELECT_RQI_QFI:
+ fh->flm_rqi = (fd->modify_field[i].value8[0] >> 6) & 0x1;
+ fh->flm_qfi = fd->modify_field[i].value8[0] & 0x3f;
+ break;
+
+ case CPY_SELECT_IPV4:
+ fh->flm_nat_ipv4 = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ case CPY_SELECT_PORT:
+ fh->flm_nat_port = ntohs(fd->modify_field[i].value16[0]);
+ break;
+
+ case CPY_SELECT_TEID:
+ fh->flm_teid = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Unknown modify field: %d",
+ fd->modify_field[i].select);
+ break;
+ }
+ }
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
+ uint32_t flm_key_id, uint32_t flm_ft, uint16_t rpl_ext_ptr,
+ uint32_t flm_scrub, uint32_t priority)
{
struct nic_flow_def *fd;
struct flow_handle fh_copy;
@@ -443,6 +500,9 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
for (int i = 0; i < RES_COUNT; ++i)
fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+ copy_fd_to_fh_flm(fh, fd, packet_data, flm_key_id, flm_ft, rpl_ext_ptr, flm_scrub,
+ priority);
+
free(fd);
return 0;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
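For readers following the patch above, the CAM-eligibility decision in km_add_match_elem() can be sketched as standalone C: a key mask is CAM-eligible only if it exactly equals one of the predefined cam_masks entries of the same word length; otherwise the element is flagged for the TCAM. This is a hypothetical, self-contained rendering (table truncated to three entries, function name illustrative), not the driver's API:

```c
#include <stdint.h>
#include <stddef.h>

struct cam_match_masks_s {
	uint32_t word_len;
	uint32_t key_mask[4];
};

/* Truncated illustration of the driver's cam_masks table. */
static const struct cam_match_masks_s cam_masks[] = {
	{ 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff } }, /* IP6_SRC, IP6_DST */
	{ 2, { 0xffffffff, 0xffffffff, 0x00000000, 0x00000000 } }, /* IP4_COMBINED */
	{ 1, { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 } }, /* 32-bit fields */
};

#define NUM_CAM_MASKS (sizeof(cam_masks) / sizeof(cam_masks[0]))

/* Returns 1 if the (word_len, e_mask) pair must go to the TCAM,
 * 0 if it can be stored in the CAM (an exact mask entry exists). */
int masked_for_tcam(uint32_t word_len, const uint32_t e_mask[4])
{
	for (size_t msk = 0; msk < NUM_CAM_MASKS; msk++) {
		if (word_len != cam_masks[msk].word_len)
			continue;

		int match = 1;

		for (uint32_t wd = 0; wd < word_len; wd++) {
			if (e_mask[wd] != cam_masks[msk].key_mask[wd]) {
				match = 0;
				break;
			}
		}

		if (match)
			return 0; /* exact mask found: CAM-eligible */
	}

	return 1; /* no predefined mask matched: TCAM */
}
```

The same logic runs in the patch with the full eleven-entry table; partially-masked keys that miss every entry (e.g. masking only the low half of a word) end up in the TCAM.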
* [PATCH v4 10/86] net/ntnic: add action queue
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (8 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 09/86] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 11/86] net/ntnic: add action mark Serhii Iliushyk
` (76 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_QUEUE
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 37 +++++++++++++++++++
2 files changed, 38 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 1c653fd5a0..5b3c26da05 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,3 +18,4 @@ any = Y
[rte_flow actions]
port_id = Y
+queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0f136ee164..a3fe2fe902 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -23,6 +23,15 @@
static void *flm_lrn_queue_arr;
+static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
+{
+ for (int i = 0; i < dev->num_queues; ++i)
+ if (dev->rx_queue[i].id == id)
+ return dev->rx_queue[i].hw_id;
+
+ return -1;
+}
+
struct flm_flow_key_def_s {
union {
struct {
@@ -349,6 +358,34 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_queue queue_tmp;
+ const struct rte_flow_action_queue *queue =
+ memcpy_mask_if(&queue_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_queue));
+
+ int hw_id = rx_queue_idx_to_hw_id(dev, queue->index);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE port %u, queue index: %u, hw id %u",
+ dev, dev->port, queue->index, hw_id);
+
+ fd->full_offload = 0;
+ *num_queues += 1;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
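The core of the queue action above is the translation from an rte_flow queue index to the NIC's hardware queue id via the device's rx_queue table. A minimal standalone sketch of that lookup, with hypothetical types standing in for the driver's flow_eth_dev fields (-1 signals an unknown index, which a production path would have to reject):

```c
#include <stdint.h>

/* Illustrative stand-in for the dev->rx_queue[] entries. */
struct queue_map {
	int id;    /* rte_flow queue index */
	int hw_id; /* NIC hardware queue id */
};

/* Mirrors rx_queue_idx_to_hw_id() from the patch: linear scan of the
 * configured queues, returning the hardware id or -1 if not found. */
int rx_queue_idx_to_hw_id(const struct queue_map *rx_queue, int num_queues, int id)
{
	for (int i = 0; i < num_queues; ++i)
		if (rx_queue[i].id == id)
			return rx_queue[i].hw_id;

	return -1;
}
```

Note the patch stores the result straight into fd->dst_id[] without checking for -1; whether that is guarded elsewhere is not visible in this hunk.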
* [PATCH v4 11/86] net/ntnic: add action mark
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (9 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 10/86] net/ntnic: add action queue Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 12/86] net/ntnic: add action jump Serhii Iliushyk
` (75 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_MARK
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 5b3c26da05..42ac9f9c31 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,5 +17,6 @@ x86-64 = Y
any = Y
[rte_flow actions]
+mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a3fe2fe902..96b7192edc 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -386,6 +386,22 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_mark mark_tmp;
+ const struct rte_flow_action_mark *mark =
+ memcpy_mask_if(&mark_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_mark));
+
+ fd->mark = mark->id;
+ NT_LOG(DBG, FILTER, "Mark: %i", mark->id);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 12/86] net/ntnic: add action jump
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (10 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 11/86] net/ntnic: add action mark Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 13/86] net/ntnic: add action drop Serhii Iliushyk
` (74 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_JUMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 17 +++++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 42ac9f9c31..f3334fc86d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+jump = Y
mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 96b7192edc..603039374a 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -402,6 +402,23 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_jump jump_tmp;
+ const struct rte_flow_action_jump *jump =
+ memcpy_mask_if(&jump_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_jump));
+
+ fd->jump_to_group = jump->group;
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP: group %u",
+ dev, jump->group);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 13/86] net/ntnic: add action drop
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (11 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 12/86] net/ntnic: add action jump Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 14/86] net/ntnic: add item eth Serhii Iliushyk
` (73 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_DROP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index f3334fc86d..372653695d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+drop = Y
jump = Y
mark = Y
port_id = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 603039374a..64168fcc7d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -419,6 +419,18 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_DROP", dev);
+
+ if (action[aidx].conf) {
+ fd->dst_id[fd->dst_num_avail].owning_port_id = 0;
+ fd->dst_id[fd->dst_num_avail].id = 0;
+ fd->dst_id[fd->dst_num_avail].type = PORT_NONE;
+ fd->dst_num_avail++;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 14/86] net/ntnic: add item eth
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (12 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 13/86] net/ntnic: add action drop Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 15/86] net/ntnic: add item IPv4 Serhii Iliushyk
` (72 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_ETH
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 23 +++
.../profile_inline/flow_api_profile_inline.c | 180 ++++++++++++++++++
3 files changed, 204 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 372653695d..36b8212bae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -15,6 +15,7 @@ x86-64 = Y
[rte_flow items]
any = Y
+eth = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 99b207a01c..0c22129fb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -120,6 +120,29 @@ enum {
} \
} while (0)
+static inline int is_non_zero(const void *addr, size_t n)
+{
+ size_t i = 0;
+ const uint8_t *p = (const uint8_t *)addr;
+
+ for (i = 0; i < n; i++)
+ if (p[i] != 0)
+ return 1;
+
+ return 0;
+}
+
+enum frame_offs_e {
+ DYN_L2 = 1,
+ DYN_L3 = 4,
+ DYN_L4 = 7,
+ DYN_L4_PAYLOAD = 8,
+ DYN_TUN_L3 = 13,
+ DYN_TUN_L4 = 16,
+};
+
+/* Sideband info bit indicator */
+
enum km_flm_if_select_e {
KM_FLM_IF_FIRST = 0,
KM_FLM_IF_SECOND = 1
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 64168fcc7d..93f666a054 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -55,6 +55,36 @@ struct flm_flow_key_def_s {
/*
* Flow Matcher functionality
*/
+static inline void set_key_def_qw(struct flm_flow_key_def_s *key_def, unsigned int qw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(qw < 2);
+
+ if (qw == 0) {
+ key_def->qw0_dyn = dyn & 0x7f;
+ key_def->qw0_ofs = ofs & 0xff;
+
+ } else {
+ key_def->qw4_dyn = dyn & 0x7f;
+ key_def->qw4_ofs = ofs & 0xff;
+ }
+}
+
+static inline void set_key_def_sw(struct flm_flow_key_def_s *key_def, unsigned int sw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(sw < 2);
+
+ if (sw == 0) {
+ key_def->sw8_dyn = dyn & 0x7f;
+ key_def->sw8_ofs = ofs & 0xff;
+
+ } else {
+ key_def->sw9_dyn = dyn & 0x7f;
+ key_def->sw9_ofs = ofs & 0xff;
+ }
+}
+
static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
{
struct flow_eth_dev *dev = ndev->eth_base;
@@ -457,6 +487,11 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
uint32_t *packet_mask,
struct flm_flow_key_def_s *key_def)
{
+ uint32_t any_count = 0;
+
+ unsigned int qw_counter = 0;
+ unsigned int sw_counter = 0;
+
*in_port_id = UINT32_MAX;
memset(packet_data, 0x0, sizeof(uint32_t) * 10);
@@ -472,6 +507,28 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH: {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (eth_spec != NULL && eth_mask != NULL) {
+ if (is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6)) {
+ qw_reserved_mac += 1;
+ }
+ }
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
if (qw_free < 0) {
@@ -484,6 +541,129 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
switch (elem[eidx].type) {
case RTE_FLOW_ITEM_TYPE_ANY:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ any_count += 1;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ETH",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (any_count > 0) {
+ NT_LOG(ERR, FILTER,
+ "Tunneled L2 ethernet not supported");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (eth_spec == NULL || eth_mask == NULL) {
+ fd->l2_prot = PROT_L2_ETH2;
+ break;
+ }
+
+ int non_zero = is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6);
+
+ if (non_zero ||
+ (eth_mask->ether_type != 0 && sw_counter >= 2)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ((eth_spec->dst_addr.addr_bytes[0] &
+ eth_mask->dst_addr.addr_bytes[0]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[1] &
+ eth_mask->dst_addr.addr_bytes[1]) << 16) +
+ ((eth_spec->dst_addr.addr_bytes[2] &
+ eth_mask->dst_addr.addr_bytes[2]) << 8) +
+ (eth_spec->dst_addr.addr_bytes[3] &
+ eth_mask->dst_addr.addr_bytes[3]);
+
+ qw_data[1] = ((eth_spec->dst_addr.addr_bytes[4] &
+ eth_mask->dst_addr.addr_bytes[4]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[5] &
+ eth_mask->dst_addr.addr_bytes[5]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[0] &
+ eth_mask->src_addr.addr_bytes[0]) << 8) +
+ (eth_spec->src_addr.addr_bytes[1] &
+ eth_mask->src_addr.addr_bytes[1]);
+
+ qw_data[2] = ((eth_spec->src_addr.addr_bytes[2] &
+ eth_mask->src_addr.addr_bytes[2]) << 24) +
+ ((eth_spec->src_addr.addr_bytes[3] &
+ eth_mask->src_addr.addr_bytes[3]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[4] &
+ eth_mask->src_addr.addr_bytes[4]) << 8) +
+ (eth_spec->src_addr.addr_bytes[5] &
+ eth_mask->src_addr.addr_bytes[5]);
+
+ qw_data[3] = ntohs(eth_spec->ether_type &
+ eth_mask->ether_type) << 16;
+
+ qw_mask[0] = (eth_mask->dst_addr.addr_bytes[0] << 24) +
+ (eth_mask->dst_addr.addr_bytes[1] << 16) +
+ (eth_mask->dst_addr.addr_bytes[2] << 8) +
+ eth_mask->dst_addr.addr_bytes[3];
+
+ qw_mask[1] = (eth_mask->dst_addr.addr_bytes[4] << 24) +
+ (eth_mask->dst_addr.addr_bytes[5] << 16) +
+ (eth_mask->src_addr.addr_bytes[0] << 8) +
+ eth_mask->src_addr.addr_bytes[1];
+
+ qw_mask[2] = (eth_mask->src_addr.addr_bytes[2] << 24) +
+ (eth_mask->src_addr.addr_bytes[3] << 16) +
+ (eth_mask->src_addr.addr_bytes[4] << 8) +
+ eth_mask->src_addr.addr_bytes[5];
+
+ qw_mask[3] = ntohs(eth_mask->ether_type) << 16;
+
+ km_add_match_elem(&fd->km,
+ &qw_data[(size_t)(qw_counter * 4)],
+ &qw_mask[(size_t)(qw_counter * 4)], 4, DYN_L2, 0);
+ set_key_def_qw(key_def, qw_counter, DYN_L2, 0);
+ qw_counter += 1;
+
+ if (!non_zero)
+ qw_free -= 1;
+
+ } else if (eth_mask->ether_type != 0) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(eth_mask->ether_type) << 16;
+ sw_data[0] = ntohs(eth_spec->ether_type) << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, DYN_L2, 12);
+ set_key_def_sw(key_def, sw_counter, DYN_L2, 12);
+ sw_counter += 1;
+ }
+
+ fd->l2_prot = PROT_L2_ETH2;
+ }
+
+ break;
+
dev->ndev->adapter_no, dev->port);
break;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
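The eth-item hunk above packs the 12 MAC bytes plus the 16-bit ether type big-endian into a four-word quad-word (QW) key. Extracted as a standalone sketch (helper name illustrative; `type_host` is the ether type already converted to host order, i.e. after ntohs(), and masking of spec against mask is omitted for brevity):

```c
#include <stdint.h>

/* QW key layout for the eth item: dst MAC bytes 0-3 fill word 0,
 * dst bytes 4-5 plus src bytes 0-1 fill word 1, src bytes 2-5 fill
 * word 2, and the ether type occupies the top 16 bits of word 3. */
void pack_eth_qw(const uint8_t dst[6], const uint8_t src[6],
		 uint16_t type_host, uint32_t qw[4])
{
	qw[0] = (uint32_t)dst[0] << 24 | (uint32_t)dst[1] << 16 |
		(uint32_t)dst[2] << 8  | dst[3];
	qw[1] = (uint32_t)dst[4] << 24 | (uint32_t)dst[5] << 16 |
		(uint32_t)src[0] << 8  | src[1];
	qw[2] = (uint32_t)src[2] << 24 | (uint32_t)src[3] << 16 |
		(uint32_t)src[4] << 8  | src[5];
	qw[3] = (uint32_t)type_host << 16;
}
```

The patch builds qw_data this way with each spec byte ANDed against the corresponding mask byte, and builds qw_mask identically from the mask alone, so hardware sees a consistent data/mask pair.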
* [PATCH v4 15/86] net/ntnic: add item IPv4
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (13 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 14/86] net/ntnic: add item eth Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-30 1:55 ` Ferruh Yigit
2024-10-29 16:41 ` [PATCH v4 16/86] net/ntnic: add item ICMP Serhii Iliushyk
` (71 subsequent siblings)
86 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_IPV4
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 162 ++++++++++++++++++
2 files changed, 163 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 36b8212bae..bae25d2e2d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+ipv4 = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 93f666a054..d5d853351e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -664,7 +664,169 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv4 *ipv4_spec =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].mask;
+
+ if (ipv4_spec == NULL || ipv4_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.version_ihl != 0 ||
+ ipv4_mask->hdr.type_of_service != 0 ||
+ ipv4_mask->hdr.total_length != 0 ||
+ ipv4_mask->hdr.packet_id != 0 ||
+ (ipv4_mask->hdr.fragment_offset != 0 &&
+ (ipv4_spec->hdr.fragment_offset != 0xffff ||
+ ipv4_mask->hdr.fragment_offset != 0xffff)) ||
+ ipv4_mask->hdr.time_to_live != 0 ||
+ ipv4_mask->hdr.hdr_checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv4 field not supported by running SW version.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (ipv4_spec->hdr.fragment_offset == 0xffff &&
+ ipv4_mask->hdr.fragment_offset == 0xffff) {
+ fd->fragmentation = 0xfe;
+ }
+
+ int match_cnt = (ipv4_mask->hdr.src_addr != 0) +
+ (ipv4_mask->hdr.dst_addr != 0) +
+ (ipv4_mask->hdr.next_proto_id != 0);
+
+ if (match_cnt <= 0) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (qw_free > 0 &&
+ (match_cnt >= 2 ||
+ (match_cnt == 1 && sw_counter >= 2))) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED,
+ error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_mask[0] = 0;
+ qw_data[0] = 0;
+
+ qw_mask[1] = ipv4_mask->hdr.next_proto_id << 16;
+ qw_data[1] = ipv4_spec->hdr.next_proto_id
+ << 16 & qw_mask[1];
+
+ qw_mask[2] = ntohl(ipv4_mask->hdr.src_addr);
+ qw_mask[3] = ntohl(ipv4_mask->hdr.dst_addr);
+
+ qw_data[2] = ntohl(ipv4_spec->hdr.src_addr) & qw_mask[2];
+ qw_data[3] = ntohl(ipv4_spec->hdr.dst_addr) & qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.src_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.src_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.src_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 12);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 12);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.dst_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.dst_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.dst_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 16);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 16);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.next_proto_id) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv4_mask->hdr.next_proto_id << 16;
+ sw_data[0] = ipv4_spec->hdr.next_proto_id
+ << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ sw_counter += 1;
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
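When the IPv4 item matches two or more fields and a QW is free, the hunk above lays the key out across one quad-word: word 0 unused, the next-protocol id in bits 16-23 of word 1, and the source and destination addresses (host order, i.e. after ntohl()) in words 2 and 3. A self-contained sketch of that layout, with illustrative names and pre-masking applied as the driver does:

```c
#include <stdint.h>

/* Builds the IPv4 QW data/mask pair used by the patch. Addresses are
 * assumed already converted to host byte order by the caller. */
void pack_ipv4_qw(uint32_t src, uint32_t src_mask,
		  uint32_t dst, uint32_t dst_mask,
		  uint8_t proto, uint8_t proto_mask,
		  uint32_t qw_data[4], uint32_t qw_mask[4])
{
	qw_mask[0] = 0;
	qw_data[0] = 0;

	qw_mask[1] = (uint32_t)proto_mask << 16;
	qw_data[1] = ((uint32_t)proto << 16) & qw_mask[1];

	qw_mask[2] = src_mask;
	qw_data[2] = src & qw_mask[2];

	qw_mask[3] = dst_mask;
	qw_data[3] = dst & qw_mask[3];
}
```

When fewer fields are set, the patch instead spends one of the two 32-bit SW words per field (src at offset 12, dst at 16, protocol at 8 within the L3 header), falling back to DYN_TUN_L3 offsets when the match is inside a tunnel.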
* [PATCH v4 16/86] net/ntnic: add item ICMP
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (14 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 15/86] net/ntnic: add item IPv4 Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 17/86] net/ntnic: add item port ID Serhii Iliushyk
` (70 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_ICMP
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 101 ++++++++++++++++++
2 files changed, 102 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index bae25d2e2d..d403ea01f3 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+icmp = Y
ipv4 = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index d5d853351e..6bf0ff8821 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -827,6 +827,107 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp *icmp_spec =
+ (const struct rte_flow_item_icmp *)elem[eidx].spec;
+ const struct rte_flow_item_icmp *icmp_mask =
+ (const struct rte_flow_item_icmp *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->hdr.icmp_cksum != 0 ||
+ icmp_mask->hdr.icmp_ident != 0 ||
+ icmp_mask->hdr.icmp_seq_nb != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->hdr.icmp_type || icmp_mask->hdr.icmp_code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ sw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter,
+ any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
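The ICMP item above prefers a single software (SW) word when one is free: icmp_type in bits 24-31 and icmp_code in bits 16-23, spec pre-masked. As a standalone sketch (helper name illustrative, not driver API):

```c
#include <stdint.h>

/* Builds the SW word data/mask pair for an ICMP type/code match,
 * mirroring the bit positions used in the patch. */
void pack_icmp_sw(uint8_t type, uint8_t type_mask,
		  uint8_t code, uint8_t code_mask,
		  uint32_t *sw_data, uint32_t *sw_mask)
{
	*sw_mask = (uint32_t)type_mask << 24 | (uint32_t)code_mask << 16;
	*sw_data = ((uint32_t)type << 24 | (uint32_t)code << 16) & *sw_mask;
}
```

If both SW words are already taken, the patch spills the same value into word 0 of a QW (words 1-3 zeroed), and only fails with "Out of SW-QW resources" when neither is available.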
* [PATCH v4 17/86] net/ntnic: add item port ID
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (15 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 16/86] net/ntnic: add item ICMP Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 18/86] net/ntnic: add item void Serhii Iliushyk
` (69 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for the RTE_FLOW_ITEM_TYPE_PORT_ID item
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../flow_api/profile_inline/flow_api_profile_inline.c | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index d403ea01f3..cdf119c4ae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,6 +18,7 @@ any = Y
eth = Y
icmp = Y
ipv4 = Y
+port_id = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6bf0ff8821..efefd52979 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -928,6 +928,17 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
+ dev->ndev->adapter_no, dev->port);
+
+ if (elem[eidx].spec) {
+ *in_port_id =
+ ((const struct rte_flow_item_port_id *)elem[eidx].spec)->id;
+ }
+
+ break;
break;
default:
--
2.45.0
* [PATCH v4 18/86] net/ntnic: add item void
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (16 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 17/86] net/ntnic: add item port ID Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 19/86] net/ntnic: add item UDP Serhii Iliushyk
` (68 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add support for the RTE_FLOW_ITEM_TYPE_VOID item
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../nthw/flow_api/profile_inline/flow_api_profile_inline.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index efefd52979..e47014615e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -939,6 +939,10 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+
+ case RTE_FLOW_ITEM_TYPE_VOID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VOID",
+ dev->ndev->adapter_no, dev->port);
break;
default:
--
2.45.0
* [PATCH v4 19/86] net/ntnic: add item UDP
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (17 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 18/86] net/ntnic: add item void Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 20/86] net/ntnic: add action TCP Serhii Iliushyk
` (67 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ITEM_TYPE_UDP item
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 103 ++++++++++++++++++
3 files changed, 106 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index cdf119c4ae..61a3d87909 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+udp = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 0c22129fb4..a95fb69870 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index e47014615e..3d4bb6e1eb 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -828,6 +828,101 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_udp *udp_spec =
+ (const struct rte_flow_item_udp *)elem[eidx].spec;
+ const struct rte_flow_item_udp *udp_mask =
+ (const struct rte_flow_item_udp *)elem[eidx].mask;
+
+ if (udp_spec == NULL || udp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (udp_mask->hdr.dgram_len != 0 ||
+ udp_mask->hdr.dgram_cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested UDP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (udp_mask->hdr.src_port || udp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(udp_mask->hdr.src_port) << 16) |
+ ntohs(udp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(udp_mask->hdr.src_port)
+ << 16) | ntohs(udp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -961,12 +1056,20 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
* [PATCH v4 20/86] net/ntnic: add action TCP
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (18 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 19/86] net/ntnic: add item UDP Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 21/86] net/ntnic: add action VLAN Serhii Iliushyk
` (66 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ITEM_TYPE_TCP item
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 108 ++++++++++++++++++
3 files changed, 111 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 61a3d87909..e3c3982895 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+tcp = Y
udp = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a95fb69870..a1aa74caf5 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -177,6 +178,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 3d4bb6e1eb..f24178a164 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1024,6 +1024,106 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_tcp *tcp_spec =
+ (const struct rte_flow_item_tcp *)elem[eidx].spec;
+ const struct rte_flow_item_tcp *tcp_mask =
+ (const struct rte_flow_item_tcp *)elem[eidx].mask;
+
+ if (tcp_spec == NULL || tcp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (tcp_mask->hdr.sent_seq != 0 ||
+ tcp_mask->hdr.recv_ack != 0 ||
+ tcp_mask->hdr.data_off != 0 ||
+ tcp_mask->hdr.tcp_flags != 0 ||
+ tcp_mask->hdr.rx_win != 0 ||
+ tcp_mask->hdr.cksum != 0 ||
+ tcp_mask->hdr.tcp_urp != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested TCP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (tcp_mask->hdr.src_port || tcp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ sw_data[0] =
+ ((ntohs(tcp_spec->hdr.src_port) << 16) |
+ ntohs(tcp_spec->hdr.dst_port)) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(tcp_spec->hdr.src_port)
+ << 16) | ntohs(tcp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1056,6 +1156,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_L4_UDP:
fh->flm_prot = 17;
break;
@@ -1066,6 +1170,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_TUN_L4_UDP:
fh->flm_prot = 17;
break;
--
2.45.0
* [PATCH v4 21/86] net/ntnic: add action VLAN
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (19 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 20/86] net/ntnic: add action TCP Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 22/86] net/ntnic: add item SCTP Serhii Iliushyk
` (65 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ITEM_TYPE_VLAN item
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 94 +++++++++++++++++++
3 files changed, 96 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e3c3982895..8b4821d6d0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -21,6 +21,7 @@ ipv4 = Y
port_id = Y
tcp = Y
udp = Y
+vlan = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a1aa74caf5..82ac3d0ff3 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -134,6 +134,7 @@ static inline int is_non_zero(const void *addr, size_t n)
enum frame_offs_e {
DYN_L2 = 1,
+ DYN_FIRST_VLAN = 2,
DYN_L3 = 4,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f24178a164..7c1b632dc0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -504,6 +504,20 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return -1;
}
+ if (implicit_vlan_vid > 0) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = 0x0fff;
+ sw_data[0] = implicit_vlan_vid & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1, DYN_FIRST_VLAN, 0);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN, 0);
+ sw_counter += 1;
+
+ fd->vlans += 1;
+ }
+
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
@@ -664,6 +678,86 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VLAN",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_vlan_hdr *vlan_spec =
+ (const struct rte_vlan_hdr *)elem[eidx].spec;
+ const struct rte_vlan_hdr *vlan_mask =
+ (const struct rte_vlan_hdr *)elem[eidx].mask;
+
+ if (vlan_spec == NULL || vlan_mask == NULL) {
+ fd->vlans += 1;
+ break;
+ }
+
+ if (!vlan_mask->vlan_tci && !vlan_mask->eth_proto)
+ break;
+
+ if (implicit_vlan_vid > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple VLANs not supported for implicit VLAN patterns.");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM,
+ error);
+ return -1;
+ }
+
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ sw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_qw(key_def, qw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ fd->vlans += 1;
+ }
+
+ break;
case RTE_FLOW_ITEM_TYPE_IPV4:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
--
2.45.0
* [PATCH v4 22/86] net/ntnic: add item SCTP
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (20 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 21/86] net/ntnic: add action VLAN Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 23/86] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
` (64 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ITEM_TYPE_SCTP item
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 102 ++++++++++++++++++
3 files changed, 105 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b4821d6d0..6691b6dce2 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+sctp = Y
tcp = Y
udp = Y
vlan = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 82ac3d0ff3..f1c57fa9fc 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -169,6 +169,7 @@ enum {
enum {
PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
+ PROT_L4_SCTP = 3,
PROT_L4_ICMP = 4
};
@@ -181,6 +182,7 @@ enum {
PROT_TUN_L4_OTHER = 0,
PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
+ PROT_TUN_L4_SCTP = 3,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7c1b632dc0..9460325cf6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1017,6 +1017,100 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_SCTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_sctp *sctp_spec =
+ (const struct rte_flow_item_sctp *)elem[eidx].spec;
+ const struct rte_flow_item_sctp *sctp_mask =
+ (const struct rte_flow_item_sctp *)elem[eidx].mask;
+
+ if (sctp_spec == NULL || sctp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (sctp_mask->hdr.tag != 0 || sctp_mask->hdr.cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested SCTP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (sctp_mask->hdr.src_port || sctp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -1258,6 +1352,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
@@ -1272,6 +1370,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_TUN_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
* [PATCH v4 23/86] net/ntnic: add items IPv6 and ICMPv6
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (21 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 22/86] net/ntnic: add item SCTP Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 24/86] net/ntnic: add action modify filed Serhii Iliushyk
` (63 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the following items:
* RTE_FLOW_ITEM_TYPE_IPV6
* RTE_FLOW_ITEM_TYPE_ICMP6
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 2 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 27 ++
.../profile_inline/flow_api_profile_inline.c | 272 ++++++++++++++++++
4 files changed, 303 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 6691b6dce2..320d3c7e0b 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,7 +17,9 @@ x86-64 = Y
any = Y
eth = Y
icmp = Y
+icmp6 = Y
ipv4 = Y
+ipv6 = Y
port_id = Y
sctp = Y
tcp = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index f1c57fa9fc..4f381bc0ef 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -164,6 +164,7 @@ enum {
enum {
PROT_L3_IPV4 = 1,
+ PROT_L3_IPV6 = 2
};
enum {
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
+ PROT_TUN_L3_IPV6 = 2
};
enum {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 6800a8d834..2aee2ee973 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -47,6 +47,33 @@ static const struct {
} err_msg[] = {
/* 00 */ { "Operation successfully completed" },
/* 01 */ { "Operation failed" },
+ /* 02 */ { "Memory allocation failed" },
+ /* 03 */ { "Too many output destinations" },
+ /* 04 */ { "Too many output queues for RSS" },
+ /* 05 */ { "The VLAN TPID specified is not supported" },
+ /* 06 */ { "The VxLan Push header specified is not accepted" },
+ /* 07 */ { "While interpreting VxLan Pop action, could not find a destination port" },
+ /* 08 */ { "Failed in creating a HW-internal VTEP port" },
+ /* 09 */ { "Too many VLAN tag matches" },
+ /* 10 */ { "IPv6 invalid header specified" },
+ /* 11 */ { "Too many tunnel ports. HW limit reached" },
+ /* 12 */ { "Unknown or unsupported flow match element received" },
+ /* 13 */ { "Match failed because of HW limitations" },
+ /* 14 */ { "Match failed because of HW resource limitations" },
+ /* 15 */ { "Match failed because of too complex element definitions" },
+ /* 16 */ { "Action failed. Too many output destinations" },
+ /* 17 */ { "Action Output failed, due to HW resource exhaustion" },
+ /* 18 */ { "Push Tunnel Header action cannot output to multiple destination queues" },
+ /* 19 */ { "Inline action HW resource exhaustion" },
+ /* 20 */ { "Action retransmit/recirculate HW resource exhaustion" },
+ /* 21 */ { "Flow counter HW resource exhaustion" },
+ /* 22 */ { "Internal HW resource exhaustion to handle Actions" },
+ /* 23 */ { "Internal HW QSL compare failed" },
+ /* 24 */ { "Internal CAT CFN reuse failed" },
+ /* 25 */ { "Match variations too complex" },
+ /* 26 */ { "Match failed because of CAM/TCAM full" },
+ /* 27 */ { "Internal creation of a tunnel end point port failed" },
+ /* 28 */ { "Unknown or unsupported flow action received" },
/* 29 */ { "Removing flow failed" },
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9460325cf6..12cbfa97a8 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -538,6 +538,22 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6: {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec != NULL && ipv6_mask != NULL) {
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16))
+ qw_reserved_ipv6 += 1;
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16))
+ qw_reserved_ipv6 += 1;
+ }
+ }
+ break;
+
default:
break;
}
@@ -922,6 +938,163 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec == NULL || ipv6_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ break;
+ }
+
+ if (ipv6_mask->hdr.vtc_flow != 0 ||
+ ipv6_mask->hdr.payload_len != 0 ||
+ ipv6_mask->hdr.hop_limits != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv6 field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.src_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.src_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ qw_counter += 1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.dst_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.dst_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 24);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 24);
+ qw_counter += 1;
+ }
+
+ if (ipv6_mask->hdr.proto != 0) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv6_mask->hdr.proto << 8;
+ sw_data[0] = ipv6_spec->hdr.proto << 8 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = 0;
+ qw_data[1] = ipv6_spec->hdr.proto << 8;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = 0;
+ qw_mask[1] = ipv6_mask->hdr.proto << 8;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_UDP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
dev->ndev->adapter_no, dev->port);
@@ -1212,6 +1385,105 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp6 *icmp_spec =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].spec;
+ const struct rte_flow_item_icmp6 *icmp_mask =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP6 field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->type || icmp_mask->code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ sw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_TCP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
dev->ndev->adapter_no, dev->port);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 24/86] net/ntnic: add action modify filed
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (22 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 23/86] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 25/86] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
` (62 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for the RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 7 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 181 ++++++++++++++++++
4 files changed, 190 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 320d3c7e0b..4201c8e8b9 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -30,5 +30,6 @@ vlan = Y
drop = Y
jump = Y
mark = Y
+modify_field = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 13fad2760a..f6557d0d20 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,10 @@ struct nic_flow_def {
*/
struct {
uint32_t select;
+ uint32_t dyn;
+ uint32_t ofs;
+ uint32_t len;
+ uint32_t level;
union {
uint8_t value8[16];
uint16_t value16[8];
@@ -137,6 +141,9 @@ struct nic_flow_def {
} modify_field[MAX_CPY_WRITERS_SUPPORTED];
uint32_t modify_field_count;
+ uint8_t ttl_sub_enable;
+ uint8_t ttl_sub_ipv4;
+ uint8_t ttl_sub_outer;
/*
* Key Matcher flow definitions
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 4f381bc0ef..6a8a38636f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -140,6 +140,7 @@ enum frame_offs_e {
DYN_L4_PAYLOAD = 8,
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
+ DYN_TUN_L4_PAYLOAD = 17,
};
/* Sideband info bit indicator */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 12cbfa97a8..1c6404b542 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -323,6 +323,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
{
unsigned int encap_decap_order = 0;
+ uint64_t modify_field_use_flags = 0x0;
+
*num_dest_port = 0;
*num_queues = 0;
@@ -461,6 +463,185 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
+ {
+ /* Note: This copy method will not work for FLOW_FIELD_POINTER */
+ struct rte_flow_action_modify_field modify_field_tmp;
+ const struct rte_flow_action_modify_field *modify_field =
+ memcpy_mask_if(&modify_field_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_modify_field));
+
+ uint64_t modify_field_use_flag = 0;
+
+ if (modify_field->src.field != RTE_FLOW_FIELD_VALUE) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only src type VALUE is supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.level > 2) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only dst level 0, 1, and 2 is supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL ||
+ modify_field->dst.field == RTE_FLOW_FIELD_IPV6_HOPLIMIT) {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SUB) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SUB is supported for TTL/HOPLIMIT.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->ttl_sub_enable) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD TTL/HOPLIMIT resource already in use.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->ttl_sub_enable = 1;
+ fd->ttl_sub_ipv4 =
+ (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL)
+ ? 1
+ : 0;
+ fd->ttl_sub_outer = (modify_field->dst.level <= 1) ? 1 : 0;
+
+ } else {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SET) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SET is supported in general.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->modify_field_count >=
+ dev->ndev->be.tpe.nb_cpy_writers) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD exceeded maximum of %u MODIFY_FIELD actions.",
+ dev->ndev->be.tpe.nb_cpy_writers);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ int mod_outer = modify_field->dst.level <= 1;
+
+ switch (modify_field->dst.field) {
+ case RTE_FLOW_FIELD_IPV4_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 1;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV6_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV6;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ /*
+ * len=2 is needed because
+ * IPv6 DSCP overlaps 2 bytes.
+ */
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_PSC_QFI:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_RQI_QFI;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 14;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 12;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 16;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_SRC:
+ case RTE_FLOW_FIELD_UDP_PORT_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_DST:
+ case RTE_FLOW_FIELD_UDP_PORT_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 2;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_TEID:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_TEID;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 4;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type is not supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ modify_field_use_flag = 1
+ << fd->modify_field[fd->modify_field_count].select;
+
+ if (modify_field_use_flag & modify_field_use_flags) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type hardware resource already used.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ memcpy(fd->modify_field[fd->modify_field_count].value8,
+ modify_field->src.value, 16);
+
+ fd->modify_field[fd->modify_field_count].level =
+ modify_field->dst.level;
+
+ modify_field_use_flags |= modify_field_use_flag;
+ fd->modify_field_count += 1;
+ }
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
* [PATCH v4 25/86] net/ntnic: add items gtp and actions raw encap/decap
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (23 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 24/86] net/ntnic: add action modify filed Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 26/86] net/ntnic: add cat module Serhii Iliushyk
` (61 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for:
* RTE_FLOW_ITEM_TYPE_GTP
* RTE_FLOW_ITEM_TYPE_GTP_PSC
* RTE_FLOW_ACTION_TYPE_RAW_ENCAP
* RTE_FLOW_ACTION_TYPE_RAW_DECAP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 4 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/flow_api_engine.h | 40 ++
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/include/stream_binary_flow_api.h | 22 ++
.../profile_inline/flow_api_profile_inline.c | 366 +++++++++++++++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 291 +++++++++++++-
7 files changed, 726 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4201c8e8b9..4cb9509742 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,8 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+gtp = Y
+gtp_psc = Y
icmp = Y
icmp6 = Y
ipv4 = Y
@@ -33,3 +35,5 @@ mark = Y
modify_field = Y
port_id = Y
queue = Y
+raw_decap = Y
+raw_encap = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 179542d2b2..70e6cad195 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,8 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct flow_action_raw_encap encap;
+ struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
};
@@ -52,6 +54,8 @@ enum nt_rte_flow_item_type {
};
extern rte_spinlock_t flow_lock;
+
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out);
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index f6557d0d20..b1d39b919b 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -56,6 +56,29 @@ enum res_type_e {
#define MAX_MATCH_FIELDS 16
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+ uint32_t user_port_id;
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+ uint16_t ip_csum_precalc;
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
+};
+
struct match_elem_s {
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
@@ -124,6 +147,23 @@ struct nic_flow_def {
int full_offload;
+ /*
+ * Action push tunnel
+ */
+ struct tunnel_header_s tun_hdr;
+
+ /*
+ * If DPDK RTE tunnel helper API used
+ * this holds the tunnel if used in flow
+ */
+ struct tunnel_s *tnl;
+
+ /*
+ * Header Stripper
+ */
+ int header_strip_end_dyn;
+ int header_strip_end_ofs;
+
/*
* Modify field
*/
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6a8a38636f..1b45ea4296 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -175,6 +175,10 @@ enum {
PROT_L4_ICMP = 4
};
+enum {
+ PROT_TUN_GTPV1U = 6,
+};
+
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index d878b848c2..8097518d61 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -18,6 +18,7 @@
#define FLOW_MAX_QUEUES 128
+#define RAW_ENCAP_DECAP_ELEMS_MAX 16
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
@@ -31,6 +32,27 @@ struct flow_queue_id_s {
int hw_id;
};
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ */
+struct flow_action_raw_encap {
+ uint8_t *data;
+ uint8_t *preserve;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ */
+struct flow_action_raw_decap {
+ uint8_t *data;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
struct flow_eth_dev; /* port device */
struct flow_handle;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1c6404b542..07a2a12cc9 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -463,6 +463,202 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
+
+ if (action[aidx].conf) {
+ const struct flow_action_raw_encap *encap =
+ (const struct flow_action_raw_encap *)action[aidx].conf;
+ const struct flow_action_raw_encap *encap_mask = action_mask
+ ? (const struct flow_action_raw_encap *)action_mask[aidx]
+ .conf
+ : NULL;
+ const struct rte_flow_item *items = encap->items;
+
+ if (encap_decap_order != 1) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (encap->size == 0 || encap->size > 255 ||
+ encap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP data/size invalid.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 2;
+
+ fd->tun_hdr.len = (uint8_t)encap->size;
+
+ if (encap_mask) {
+ memcpy_mask_if(fd->tun_hdr.d.hdr8, encap->data,
+ encap_mask->data, fd->tun_hdr.len);
+
+ } else {
+ memcpy(fd->tun_hdr.d.hdr8, encap->data, fd->tun_hdr.len);
+ }
+
+ while (items->type != RTE_FLOW_ITEM_TYPE_END) {
+ switch (items->type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ fd->tun_hdr.l2_len = 14;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->tun_hdr.nb_vlans += 1;
+ fd->tun_hdr.l2_len += 4;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ fd->tun_hdr.ip_version = 4;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv4_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 3] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->tun_hdr.ip_version = 6;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv6_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_sctp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_tcp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_udp_hdr);
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_icmp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->tun_hdr.l4_len =
+ sizeof(struct rte_flow_item_icmp6);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 3] = 0xfd;
+ break;
+
+ default:
+ break;
+ }
+
+ items++;
+ }
+
+ if (fd->tun_hdr.nb_vlans > 3) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Encapsulation with %d vlans not supported.",
+ (int)fd->tun_hdr.nb_vlans);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ /* Convert encap data to 128-bit little endian */
+ for (size_t i = 0; i < (encap->size + 15) / 16; ++i) {
+ uint8_t *data = fd->tun_hdr.d.hdr8 + i * 16;
+
+ for (unsigned int j = 0; j < 8; ++j) {
+ uint8_t t = data[j];
+ data[j] = data[15 - j];
+ data[15 - j] = t;
+ }
+ }
+ }
+
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_DECAP", dev);
+
+ if (action[aidx].conf) {
+ /* Mask is N/A for RAW_DECAP */
+ const struct flow_action_raw_decap *decap =
+ (const struct flow_action_raw_decap *)action[aidx].conf;
+
+ if (encap_decap_order != 0) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (decap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_DECAP must decap something.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 1;
+
+ switch (decap->items[decap->item_count - 2].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->header_strip_end_dyn = DYN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->header_strip_end_dyn = DYN_L4;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->header_strip_end_dyn = DYN_L4_PAYLOAD;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ fd->header_strip_end_dyn = DYN_TUN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ default:
+ fd->header_strip_end_dyn = DYN_L2;
+ fd->header_strip_end_ofs = 0;
+ break;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
{
@@ -1765,6 +1961,174 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_hdr *gtp_spec =
+ (const struct rte_gtp_hdr *)elem[eidx].spec;
+ const struct rte_gtp_hdr *gtp_mask =
+ (const struct rte_gtp_hdr *)elem[eidx].mask;
+
+ if (gtp_spec == NULL || gtp_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_mask->gtp_hdr_info != 0 ||
+ gtp_mask->msg_type != 0 || gtp_mask->plen != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_mask->teid) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_mask->teid);
+ sw_data[0] =
+ ntohl(gtp_spec->teid) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_spec->teid);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_mask->teid);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP_PSC:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP_PSC",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_spec =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].spec;
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_mask =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].mask;
+
+ if (gtp_psc_spec == NULL || gtp_psc_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_psc_mask->type != 0 ||
+ gtp_psc_mask->ext_hdr_len != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP PSC field is not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_psc_mask->qfi) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ sw_data[0] = ntohl(gtp_psc_spec->qfi) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 14);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_psc_spec->qfi);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 14);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1928,7 +2292,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
- uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index b9d723c9dd..20b5cb2835 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -16,6 +16,224 @@
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out)
+{
+ int hdri = 0;
+ int pkti = 0;
+
+ /* Ethernet */
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_ether_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ETH;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ rte_be16_t ether_type = ((struct rte_ether_hdr *)&data[pkti])->ether_type;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ether_hdr);
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* VLAN */
+ while (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ1)) {
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_vlan_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_VLAN;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ ether_type = ((struct rte_vlan_hdr *)&data[pkti])->eth_proto;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_vlan_hdr);
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 3 */
+ uint8_t next_header = 0;
+
+ if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) && (data[pkti] & 0xF0) == 0x40) {
+ if (size - pkti < (int)sizeof(struct rte_ipv4_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_header = data[pkti + 9];
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ipv4_hdr);
+
+ } else if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6) &&
+ (data[pkti] & 0xF0) == 0x60) {
+ if (size - pkti < (int)sizeof(struct rte_ipv6_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV6;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_header = data[pkti + 6];
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ipv6_hdr);
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 4 */
+ int gtpu_encap = 0;
+
+ if (next_header == 1) { /* ICMP */
+ if (size - pkti < (int)sizeof(struct rte_icmp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 58) { /* ICMP6 */
+ if (size - pkti < (int)sizeof(struct rte_flow_item_icmp6))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP6;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 6) { /* TCP */
+ if (size - pkti < (int)sizeof(struct rte_tcp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_TCP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_tcp_hdr);
+
+ } else if (next_header == 17) { /* UDP */
+ if (size - pkti < (int)sizeof(struct rte_udp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_UDP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ gtpu_encap = ((struct rte_udp_hdr *)&data[pkti])->dst_port ==
+ rte_cpu_to_be_16(RTE_GTPU_UDP_PORT);
+
+ hdri += 1;
+ pkti += sizeof(struct rte_udp_hdr);
+
+ } else if (next_header == 132) {/* SCTP */
+ if (size - pkti < (int)sizeof(struct rte_sctp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_SCTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_sctp_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* GTPv1-U */
+ if (gtpu_encap) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ int extension_present_bit = ((struct rte_gtp_hdr *)&data[pkti])->e;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr);
+
+ if (extension_present_bit) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr_ext_word))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ uint8_t next_ext = ((struct rte_gtp_hdr_ext_word *)&data[pkti])->next_ext;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr_ext_word);
+
+ while (next_ext) {
+ size_t ext_len = data[pkti] * 4;
+
+ if (ext_len == 0 || size - pkti < (int)ext_len)
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_ext = data[pkti + ext_len - 1];
+
+ hdri += 1;
+ pkti += ext_len;
+ }
+ }
+ }
+
+ if (size - pkti != 0)
+ return -1;
+
+interpret_end:
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_END;
+ out[hdri].spec = NULL;
+ out[hdri].mask = NULL;
+
+ return hdri + 1;
+}
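The interpreter above dispatches on the IP version nibble and on the protocol/next-header byte (offset 9 within an IPv4 header, offset 6 within an IPv6 header). A minimal, self-contained sketch of that dispatch, using a hypothetical helper name and no DPDK types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Return the L4 protocol number of the packet in `data`, or -1 if the
 * buffer is too short or not an IP packet. Simplified illustration of
 * the version/next-header dispatch used by interpret_raw_data(). */
static int next_proto(const uint8_t *data, size_t size)
{
	if (size < 1)
		return -1;

	uint8_t ver = data[0] >> 4;

	if (ver == 4) {
		if (size < 20)		/* minimal IPv4 header */
			return -1;
		return data[9];		/* IPv4 protocol field */
	}

	if (ver == 6) {
		if (size < 40)		/* fixed IPv6 header */
			return -1;
		return data[6];		/* IPv6 next-header field */
	}

	return -1;
}
```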
+
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
{
if (error) {
@@ -95,13 +313,78 @@ int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item
return (type >= 0) ? 0 : -1;
}
-int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- int max_elem __rte_unused,
- uint32_t queue_offset __rte_unused)
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset)
{
+ int aidx = 0;
int type = -1;
+ do {
+ type = actions[aidx].type;
+ if (type >= 0) {
+ action->flow_actions[aidx].type = type;
+
+ /*
+ * Non-compatible actions handled here
+ */
+ switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
+ const struct rte_flow_action_raw_decap *decap =
+ (const struct rte_flow_action_raw_decap *)actions[aidx].conf;
+ int item_count = interpret_raw_data(decap->data, NULL, decap->size,
+ action->decap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->decap.data = decap->data;
+ action->decap.size = decap->size;
+ action->decap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->decap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: {
+ const struct rte_flow_action_raw_encap *encap =
+ (const struct rte_flow_action_raw_encap *)actions[aidx].conf;
+ int item_count = interpret_raw_data(encap->data, encap->preserve,
+ encap->size, action->encap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->encap.data = encap->data;
+ action->encap.preserve = encap->preserve;
+ action->encap.size = encap->size;
+ action->encap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->encap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE: {
+ const struct rte_flow_action_queue *queue =
+ (const struct rte_flow_action_queue *)actions[aidx].conf;
+ action->queue.index = queue->index + queue_offset;
+ action->flow_actions[aidx].conf = &action->queue;
+ }
+ break;
+
+ default: {
+ action->flow_actions[aidx].conf = actions[aidx].conf;
+ }
+ break;
+ }
+
+ aidx++;
+
+ if (aidx == max_elem)
+ return -1;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ACTION_TYPE_END);
+
return (type >= 0) ? 0 : -1;
}
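The action conversion above scans a sentinel-terminated array (`RTE_FLOW_ACTION_TYPE_END` is 0) while guarding against overflowing the fixed-size output. A simplified sketch of that loop shape, with plain ints standing in for the driver's action structs:

```c
#include <assert.h>

#define ACT_END 0	/* sentinel, mirroring RTE_FLOW_ACTION_TYPE_END == 0 */

/* Copy a sentinel-terminated list into a fixed-size output array.
 * Returns the number of entries copied (including the END sentinel),
 * or -1 if the output would overflow before END is seen. */
static int copy_actions(const int *in, int *out, int max_elem)
{
	int i = 0;
	int type;

	do {
		type = in[i];
		out[i] = type;
		i++;

		if (i == max_elem && type != ACT_END)
			return -1;	/* ran out of room before END */
	} while (type != ACT_END);

	return i;
}
```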
--
2.45.0
* [PATCH v4 26/86] net/ntnic: add cat module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (24 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 25/86] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 27/86] net/ntnic: add SLC LR module Serhii Iliushyk
` (60 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Categorizer module’s main purpose is to select the behavior
of other modules in the FPGA pipeline depending on a protocol check.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 24 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 267 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 165 +++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 47 +++
.../profile_inline/flow_api_profile_inline.c | 83 ++++++
5 files changed, 586 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 1b45ea4296..87fc16ecb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -315,11 +315,35 @@ int hw_mod_cat_reset(struct flow_api_backend_s *be);
int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
+/* KCE/KCS/FTE KM */
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+/* KCE/KCS/FTE FLM */
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count);
+
int hw_mod_cat_kcc_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_exo_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index d266760123..9164ec1ae0 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -951,6 +951,97 @@ static int hw_mod_cat_fte_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_fte_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_fte_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ const uint32_t key_cnt = (_VER_ >= 20) ? 4 : 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8 * be->cat.nb_flow_types * key_cnt)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v18.fte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v21.fte[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, value, 1);
+}
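The FTE accessors above follow a pattern used throughout this driver: one static `_mod()` helper validates the index and takes a `get` flag to choose direction, and thin public set/get wrappers pass a value by pointer either way. A minimal sketch of that idiom, using a hypothetical one-field table rather than the real backend structures:

```c
#include <assert.h>
#include <stdint.h>

struct entry {
	uint32_t enable_bm;
};

/* Shared modifier: bounds-check once, then read or write depending on
 * `get` (the role played by GET_SET() in the driver). */
static int fte_mod(struct entry *tbl, int nb, int index, uint32_t *value, int get)
{
	if (index < 0 || index >= nb)
		return -1;	/* INDEX_TOO_LARGE in the driver */

	if (get)
		*value = tbl[index].enable_bm;
	else
		tbl[index].enable_bm = *value;

	return 0;
}

static int fte_set(struct entry *tbl, int nb, int index, uint32_t value)
{
	return fte_mod(tbl, nb, index, &value, 0);
}

static int fte_get(struct entry *tbl, int nb, int index, uint32_t *value)
{
	return fte_mod(tbl, nb, index, value, 1);
}
```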
+
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -964,6 +1055,45 @@ int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cte_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cte_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTE_ENABLE_BM:
+ GET_SET(be->cat.v18.cte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -979,6 +1109,51 @@ int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cts_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cts_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ int addr_size = (be->cat.cts_num + 1) / 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs * addr_size)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTS_CAT_A:
+ GET_SET(be->cat.v18.cts[index].cat_a, value);
+ break;
+
+ case HW_CAT_CTS_CAT_B:
+ GET_SET(be->cat.v18.cts[index].cat_b, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -992,6 +1167,98 @@ int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cot_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cot_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_COT_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->cat.v18.cot[index], (uint8_t)*value,
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->cat.v18.cot, struct cat_v18_cot_s, index, *value);
+ break;
+
+ case HW_CAT_COT_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->cat.v18.cot, struct cat_v18_cot_s, index, *value,
+ be->max_categories);
+ break;
+
+ case HW_CAT_COT_COPY_FROM:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memcpy(&be->cat.v18.cot[index], &be->cat.v18.cot[*value],
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COLOR:
+ GET_SET(be->cat.v18.cot[index].color, value);
+ break;
+
+ case HW_CAT_COT_KM:
+ GET_SET(be->cat.v18.cot[index].km, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cot_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4ea9387c80..addd5f288f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -22,6 +22,14 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
+ /* Items */
+ struct hw_db_inline_resource_db_cat {
+ struct hw_db_inline_cat_data data;
+ int ref;
+ } *cat;
+
+ uint32_t nb_cat;
+
/* Hardware */
struct hw_db_inline_resource_db_cfn {
@@ -47,6 +55,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_cat = ndev->be.cat.nb_cat_funcs;
+ db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
+
+ if (db->cat == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -56,6 +72,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->cat);
free(db->cfn);
@@ -70,6 +87,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_CAT:
+ hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_COT:
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
@@ -80,6 +101,69 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+/******************************************************************************/
+/* Filter */
+/******************************************************************************/
+
+/*
+ * Setup a filter to match:
+ * All packets in CFN checks
+ * All packets in KM
+ * All packets in FLM with look-up C FT equal to specified argument
+ *
+ * Setup a QSL recipe to DROP all matching packets
+ *
+ * Note: QSL recipe 0 uses DISCARD in order to allow for exception paths (UNMQ)
+ * Consequently another QSL recipe with hard DROP is needed
+ */
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id)
+{
+ (void)ft;
+
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ /* Select and enable QSL recipe */
+ if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
+ return -1;
+
+ if (hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6))
+ return -1;
+
+ if (hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0x8))
+ return -1;
+
+ if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ /* Make all CFN checks TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, 0x0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x1))
+ return -1;
+
+ /* Final match: look-up_A == TRUE && look-up_C == TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3))
+ return -1;
+
+ if (hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ return 0;
+}
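The CTS addressing used above packs two recipes (a CAT_A/CAT_B pair) per address word, so each category function owns `addr_size = (cts_num + 1) / 2` consecutive slots starting at `cat_hw_id * addr_size` (hence the `offset * cat_hw_id + 1` index for the CAT_B recipe). A small sketch of that index computation, with illustrative names:

```c
#include <assert.h>

/* Map (category function, slot) to a flat CTS index, or -1 if the slot
 * is outside the function's block. Two recipes live in each word, so a
 * function owns (cts_num + 1) / 2 consecutive slots. */
static int cts_index(int cat_func, int slot, int cts_num)
{
	int addr_size = (cts_num + 1) / 2;

	if (slot < 0 || slot >= addr_size)
		return -1;

	return cat_func * addr_size + slot;
}
```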
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -150,3 +234,84 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
db->cot[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* CAT */
+/******************************************************************************/
+
+static int hw_db_inline_cat_compare(const struct hw_db_inline_cat_data *data1,
+ const struct hw_db_inline_cat_data *data2)
+{
+ return data1->vlan_mask == data2->vlan_mask &&
+ data1->mac_port_mask == data2->mac_port_mask &&
+ data1->ptc_mask_frag == data2->ptc_mask_frag &&
+ data1->ptc_mask_l2 == data2->ptc_mask_l2 &&
+ data1->ptc_mask_l3 == data2->ptc_mask_l3 &&
+ data1->ptc_mask_l4 == data2->ptc_mask_l4 &&
+ data1->ptc_mask_tunnel == data2->ptc_mask_tunnel &&
+ data1->ptc_mask_l3_tunnel == data2->ptc_mask_l3_tunnel &&
+ data1->ptc_mask_l4_tunnel == data2->ptc_mask_l4_tunnel &&
+ data1->err_mask_ttl_tunnel == data2->err_mask_ttl_tunnel &&
+ data1->err_mask_ttl == data2->err_mask_ttl && data1->ip_prot == data2->ip_prot &&
+ data1->ip_prot_tunnel == data2->ip_prot_tunnel;
+}
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cat_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_CAT;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ int ref = db->cat[i].ref;
+
+ if (ref > 0 && hw_db_inline_cat_compare(data, &db->cat[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cat_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cat[idx.ids].ref = 1;
+ memcpy(&db->cat[idx.ids].data, data, sizeof(struct hw_db_inline_cat_data));
+
+ return idx;
+}
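`hw_db_inline_cat_add` above implements a reference-counted, deduplicating pool: adding data equal to an existing live slot bumps that slot's refcount, otherwise the first free slot is claimed. The same shape, reduced to an int payload:

```c
#include <assert.h>

struct slot {
	int data;
	int ref;
};

/* Return the index holding `data`, reusing an existing live slot when the
 * payload compares equal, or claiming the first free slot. Returns -1 when
 * the pool is exhausted. */
static int pool_add(struct slot *pool, int n, int data)
{
	int free_idx = -1;

	for (int i = 0; i < n; i++) {
		if (pool[i].ref > 0 && pool[i].data == data) {
			pool[i].ref++;	/* deduplicate: share the entry */
			return i;
		}

		if (free_idx < 0 && pool[i].ref <= 0)
			free_idx = i;
	}

	if (free_idx < 0)
		return -1;	/* resource exhaustion */

	pool[free_idx].data = data;
	pool[free_idx].ref = 1;
	return free_idx;
}
```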
+
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cat[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cat[idx.ids].ref -= 1;
+
+ if (db->cat[idx.ids].ref <= 0) {
+ memset(&db->cat[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cat_data));
+ db->cat[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 0116af015d..38502ac1ec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,12 +36,37 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_cat_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
+ HW_DB_IDX_TYPE_CAT,
};
/* Functionality data types */
+struct hw_db_inline_cat_data {
+ uint32_t vlan_mask : 4;
+ uint32_t mac_port_mask : 8;
+ uint32_t ptc_mask_frag : 4;
+ uint32_t ptc_mask_l2 : 7;
+ uint32_t ptc_mask_l3 : 3;
+ uint32_t ptc_mask_l4 : 5;
+ uint32_t padding0 : 1;
+
+ uint32_t ptc_mask_tunnel : 11;
+ uint32_t ptc_mask_l3_tunnel : 3;
+ uint32_t ptc_mask_l4_tunnel : 5;
+ uint32_t err_mask_ttl_tunnel : 2;
+ uint32_t err_mask_ttl : 2;
+ uint32_t padding1 : 9;
+
+ uint8_t ip_prot;
+ uint8_t ip_prot_tunnel;
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -70,6 +95,16 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ };
+ };
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -84,4 +119,16 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+/**/
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data);
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+
+/**/
+
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 07a2a12cc9..3ea4c73e24 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2346,6 +2350,67 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ struct hw_db_inline_action_set_data action_set_data = { 0 };
+
+ if (fd->jump_to_group != UINT32_MAX) {
+ /* Action Set only contains jump */
+ action_set_data.contains_jump = 1;
+ action_set_data.jump = fd->jump_to_group;
+
+ } else {
+ /* Action Set doesn't contain jump */
+ action_set_data.contains_jump = 0;
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = 0,
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle,
+ &cot_data);
+ fh->db_idxs[fh->db_idx_counter++] = cot_idx.raw;
+ action_set_data.cot = cot_idx;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
+
+ /* Setup CAT */
+ struct hw_db_inline_cat_data cat_data = {
+ .vlan_mask = (0xf << fd->vlans) & 0xf,
+ .mac_port_mask = 1 << fh->port_id,
+ .ptc_mask_frag = fd->fragmentation,
+ .ptc_mask_l2 = fd->l2_prot != -1 ? (1 << fd->l2_prot) : -1,
+ .ptc_mask_l3 = fd->l3_prot != -1 ? (1 << fd->l3_prot) : -1,
+ .ptc_mask_l4 = fd->l4_prot != -1 ? (1 << fd->l4_prot) : -1,
+ .err_mask_ttl = (fd->ttl_sub_enable &&
+ fd->ttl_sub_outer) ? -1 : 0x1,
+ .ptc_mask_tunnel = fd->tunnel_prot !=
+ -1 ? (1 << fd->tunnel_prot) : -1,
+ .ptc_mask_l3_tunnel =
+ fd->tunnel_l3_prot != -1 ? (1 << fd->tunnel_l3_prot) : -1,
+ .ptc_mask_l4_tunnel =
+ fd->tunnel_l4_prot != -1 ? (1 << fd->tunnel_l4_prot) : -1,
+ .err_mask_ttl_tunnel =
+ (fd->ttl_sub_enable && !fd->ttl_sub_outer) ? -1 : 0x1,
+ .ip_prot = fd->ip_prot,
+ .ip_prot_tunnel = fd->tunnel_ip_prot,
+ };
+ struct hw_db_cat_idx cat_idx =
+ hw_db_inline_cat_add(dev->ndev, dev->ndev->hw_db_handle, &cat_data);
+ fh->db_idxs[fh->db_idx_counter++] = cat_idx.raw;
+
+ if (cat_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference CAT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
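The CAT initializer above assigns -1 to narrow unsigned bitfields (e.g. `ptc_mask_l3 : 3`) to obtain an all-ones "match anything" mask through truncation. A sketch of that effect, using an illustrative struct rather than the driver's real layout:

```c
#include <assert.h>
#include <stdint.h>

/* Storing -1 in an n-bit unsigned field keeps only the low n bits,
 * i.e. an all-ones wildcard mask. */
struct cat_masks {
	uint32_t ptc_mask_l3 : 3;
	uint32_t ptc_mask_l4 : 5;
};

static uint32_t l3_mask(int l3_prot)
{
	struct cat_masks m = {
		/* same shape as the driver's conditional initializers */
		.ptc_mask_l3 = l3_prot != -1 ? (1u << l3_prot) : (uint32_t)-1,
	};

	return m.ptc_mask_l3;
}
```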
@@ -2378,6 +2443,20 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* COT is locked to CFN. Don't set color for CFN 0 */
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+
+ if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ /* Set up a filter matching all packets violating traffic policing parameters */
+ flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+
+ if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE,
+ NT_VIOLATING_MBR_QSL) < 0)
+ goto err_exit0;
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -2412,6 +2491,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PRESET_ALL, 0, 0, 0);
+ hw_mod_cat_cfn_flush(&ndev->be, 0, 1);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+ hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
hw_mod_tpe_reset(&ndev->be);
--
2.45.0
* [PATCH v4 27/86] net/ntnic: add SLC LR module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (25 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 26/86] net/ntnic: add cat module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 28/86] net/ntnic: add PDB module Serhii Iliushyk
` (59 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new head to the packet.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 104 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 19 ++++
.../profile_inline/flow_api_profile_inline.c | 37 ++++++-
5 files changed, 257 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 87fc16ecb4..2711f44083 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -697,6 +697,8 @@ int hw_mod_slc_lr_alloc(struct flow_api_backend_s *be);
void hw_mod_slc_lr_free(struct flow_api_backend_s *be);
int hw_mod_slc_lr_reset(struct flow_api_backend_s *be);
int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value);
struct pdb_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
index 1d878f3f96..30e5e38690 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
@@ -66,3 +66,103 @@ int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int co
return be->iface->slc_lr_rcp_flush(be->be_dev, &be->slc_lr, start_idx, count);
}
+
+static int hw_mod_slc_lr_rcp_mod(struct flow_api_backend_s *be, enum hw_slc_lr_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 2:
+ switch (field) {
+ case HW_SLC_LR_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->slc_lr.v2.rcp[index], (uint8_t)*value,
+ sizeof(struct hw_mod_slc_lr_v2_s));
+ break;
+
+ case HW_SLC_LR_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value, be->max_categories);
+ break;
+
+ case HW_SLC_LR_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].head_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].tail_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_PCAP:
+ GET_SET(be->slc_lr.v2.rcp[index].pcap, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_slc_lr_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index addd5f288f..b17bce3745 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,7 +20,13 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_slc_lr {
+ struct hw_db_inline_slc_lr_data data;
+ int ref;
+ } *slc_lr;
+
uint32_t nb_cot;
+ uint32_t nb_slc_lr;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -55,6 +61,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_slc_lr = ndev->be.max_categories;
+ db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
+
+ if (db->slc_lr == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -72,6 +86,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->slc_lr);
free(db->cat);
free(db->cfn);
@@ -95,6 +110,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_SLC_LR:
+ hw_db_inline_slc_lr_deref(ndev, db_handle,
+ *(struct hw_db_slc_lr_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -235,6 +255,90 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* SLC_LR */
+/******************************************************************************/
+
+static int hw_db_inline_slc_lr_compare(const struct hw_db_inline_slc_lr_data *data1,
+ const struct hw_db_inline_slc_lr_data *data2)
+{
+ if (!data1->head_slice_en)
+ return data1->head_slice_en == data2->head_slice_en;
+
+ return data1->head_slice_en == data2->head_slice_en &&
+ data1->head_slice_dyn == data2->head_slice_dyn &&
+ data1->head_slice_ofs == data2->head_slice_ofs;
+}
+
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_slc_lr_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_SLC_LR;
+
+ for (uint32_t i = 1; i < db->nb_slc_lr; ++i) {
+ int ref = db->slc_lr[i].ref;
+
+ if (ref > 0 && hw_db_inline_slc_lr_compare(data, &db->slc_lr[i].data)) {
+ idx.ids = i;
+ hw_db_inline_slc_lr_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->slc_lr[idx.ids].ref = 1;
+ memcpy(&db->slc_lr[idx.ids].data, data, sizeof(struct hw_db_inline_slc_lr_data));
+
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_SLC_EN, idx.ids, data->head_slice_en);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_DYN, idx.ids, data->head_slice_dyn);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_OFS, idx.ids, data->head_slice_ofs);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->slc_lr[idx.ids].ref += 1;
+}
+
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->slc_lr[idx.ids].ref -= 1;
+
+ if (db->slc_lr[idx.ids].ref <= 0) {
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->slc_lr[idx.ids].data, 0x0, sizeof(struct hw_db_inline_slc_lr_data));
+ db->slc_lr[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 38502ac1ec..ef63336b1c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -40,10 +40,15 @@ struct hw_db_cat_idx {
HW_DB_IDX;
};
+struct hw_db_slc_lr_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_SLC_LR,
};
/* Functionality data types */
@@ -89,6 +94,13 @@ struct hw_db_inline_cot_data {
uint32_t padding : 24;
};
+struct hw_db_inline_slc_lr_data {
+ uint32_t head_slice_en : 1;
+ uint32_t head_slice_dyn : 5;
+ uint32_t head_slice_ofs : 8;
+ uint32_t padding : 18;
+};
+
struct hw_db_inline_hsh_data {
uint32_t func;
uint64_t hash_mask;
@@ -119,6 +131,13 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data);
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 3ea4c73e24..d7491fce63 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2276,18 +2276,38 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
-static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
- const struct nic_flow_def *fd __rte_unused,
+static int setup_flow_flm_actions(struct flow_eth_dev *dev,
+ const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
- uint32_t local_idxs[] __rte_unused,
- uint32_t *local_idx_counter __rte_unused,
+ uint32_t local_idxs[],
+ uint32_t *local_idx_counter,
uint16_t *flm_rpl_ext_ptr __rte_unused,
uint32_t *flm_ft __rte_unused,
uint32_t *flm_scrub __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error)
{
+ /* Setup SLC LR */
+ struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
+
+ if (fd->header_strip_end_dyn != 0 || fd->header_strip_end_ofs != 0) {
+ struct hw_db_inline_slc_lr_data slc_lr_data = {
+ .head_slice_en = 1,
+ .head_slice_dyn = fd->header_strip_end_dyn,
+ .head_slice_ofs = fd->header_strip_end_ofs,
+ };
+ slc_lr_idx =
+ hw_db_inline_slc_lr_add(dev->ndev, dev->ndev->hw_db_handle, &slc_lr_data);
+ local_idxs[(*local_idx_counter)++] = slc_lr_idx.raw;
+
+ if (slc_lr_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference SLC LR resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -2449,6 +2469,9 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* SLC LR index 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2497,6 +2520,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
+
hw_mod_tpe_reset(&ndev->be);
flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
--
2.45.0
* [PATCH v4 28/86] net/ntnic: add PDB module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (26 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 27/86] net/ntnic: add SLC LR module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 29/86] net/ntnic: add QSL module Serhii Iliushyk
` (58 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Packet Description Builder module creates packet meta-data,
for example virtio-net headers.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 ++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 17 +++
3 files changed, 164 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 2711f44083..7f1449d8ee 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -740,6 +740,9 @@ int hw_mod_pdb_alloc(struct flow_api_backend_s *be);
void hw_mod_pdb_free(struct flow_api_backend_s *be);
int hw_mod_pdb_reset(struct flow_api_backend_s *be);
int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value);
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be);
struct tpe_func_s {
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
index c3facacb08..59285405ba 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
@@ -85,6 +85,150 @@ int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->pdb_rcp_flush(be->be_dev, &be->pdb, start_idx, count);
}
+static int hw_mod_pdb_rcp_mod(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 9:
+ switch (field) {
+ case HW_PDB_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->pdb.v9.rcp[index], (uint8_t)*value,
+ sizeof(struct pdb_v9_rcp_s));
+ break;
+
+ case HW_PDB_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value,
+ be->pdb.nb_pdb_rcp_categories);
+ break;
+
+ case HW_PDB_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value);
+ break;
+
+ case HW_PDB_RCP_DESCRIPTOR:
+ GET_SET(be->pdb.v9.rcp[index].descriptor, value);
+ break;
+
+ case HW_PDB_RCP_DESC_LEN:
+ GET_SET(be->pdb.v9.rcp[index].desc_len, value);
+ break;
+
+ case HW_PDB_RCP_TX_PORT:
+ GET_SET(be->pdb.v9.rcp[index].tx_port, value);
+ break;
+
+ case HW_PDB_RCP_TX_IGNORE:
+ GET_SET(be->pdb.v9.rcp[index].tx_ignore, value);
+ break;
+
+ case HW_PDB_RCP_TX_NOW:
+ GET_SET(be->pdb.v9.rcp[index].tx_now, value);
+ break;
+
+ case HW_PDB_RCP_CRC_OVERWRITE:
+ GET_SET(be->pdb.v9.rcp[index].crc_overwrite, value);
+ break;
+
+ case HW_PDB_RCP_ALIGN:
+ GET_SET(be->pdb.v9.rcp[index].align, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs0_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs0_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs1_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs1_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs2_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs2_rel, value);
+ break;
+
+ case HW_PDB_RCP_IP_PROT_TNL:
+ GET_SET(be->pdb.v9.rcp[index].ip_prot_tnl, value);
+ break;
+
+ case HW_PDB_RCP_PPC_HSH:
+ GET_SET(be->pdb.v9.rcp[index].ppc_hsh, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_EN:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_en, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_BIT:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_bit, value);
+ break;
+
+ case HW_PDB_RCP_PCAP_KEEP_FCS:
+ GET_SET(be->pdb.v9.rcp[index].pcap_keep_fcs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 9 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_pdb_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be)
{
return be->iface->pdb_config_flush(be->be_dev, &be->pdb);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index d7491fce63..405d87f632 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2472,6 +2472,19 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ /* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
+ */
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESCRIPTOR, 0, 7) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESC_LEN, 0, 6) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2529,6 +2542,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+ hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_PRESET_ALL, 0, 0);
+ hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
--
2.45.0
* [PATCH v4 29/86] net/ntnic: add QSL module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (27 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 28/86] net/ntnic: add PDB module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 30/86] net/ntnic: add KM module Serhii Iliushyk
` (57 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Queue Selector module directs packets to a given destination,
which includes host queues, physical ports, exception paths, and discard.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/hw_mod_backend.h | 8 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 65 ++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 96 +++++++-
7 files changed, 595 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 7f031ccda8..edffd0a57a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -184,8 +184,11 @@ extern const char *dbg_res_descr[];
int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
uint32_t alignment);
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment);
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
#endif
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 7f1449d8ee..6fa2a3d94f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -666,8 +666,16 @@ int hw_mod_qsl_alloc(struct flow_api_backend_s *be);
void hw_mod_qsl_free(struct flow_api_backend_s *be);
int hw_mod_qsl_reset(struct flow_api_backend_s *be);
int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value);
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_qsl_unmq_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
uint32_t value);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 2aee2ee973..a51d621ef9 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -106,11 +106,52 @@ int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return -1;
}
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment)
+{
+ unsigned int idx_offs;
+
+ for (unsigned int res_idx = 0; res_idx < ndev->res[res_type].resource_count - (num - 1);
+ res_idx += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, res_idx)) {
+ for (idx_offs = 1; idx_offs < num; idx_offs++)
+ if (flow_nic_is_resource_used(ndev, res_type, res_idx + idx_offs))
+ break;
+
+ if (idx_offs < num)
+ continue;
+
+ /* found a contiguous number of "num" res_type elements - allocate them */
+ for (idx_offs = 0; idx_offs < num; idx_offs++) {
+ flow_nic_mark_resource_used(ndev, res_type, res_idx + idx_offs);
+ ndev->res[res_type].ref[res_idx + idx_offs] = 1;
+ }
+
+ return res_idx;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
}
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
+{
+ NT_LOG(DBG, FILTER, "Reference resource %s idx %i (before ref cnt %i)",
+ dbg_res_descr[res_type], index, ndev->res[res_type].ref[index]);
+ assert(flow_nic_is_resource_used(ndev, res_type, index));
+
+ if (ndev->res[res_type].ref[index] == (uint32_t)-1)
+ return -1;
+
+ ndev->res[res_type].ref[index]++;
+ return 0;
+}
+
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
{
NT_LOG(DBG, FILTER, "De-reference resource %s idx %i (before ref cnt %i)",
@@ -348,6 +389,18 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 0);
hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1);
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (int i = 0; i < eth_dev->num_queues; ++i) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value & ~(1U << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
#ifdef FLOW_DEBUG
ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
#endif
@@ -580,6 +633,18 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->rss_target_id = -1;
+ if (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (i = 0; i < eth_dev->num_queues; i++) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value | (1 << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
*rss_target_id = eth_dev->rss_target_id;
nic_insert_eth_port_dev(ndev, eth_dev);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
index 93b37d595e..70fe97a298 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
@@ -104,6 +104,114 @@ int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_rcp_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_rcp_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.rcp[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_rcp_s));
+ break;
+
+ case HW_QSL_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value,
+ be->qsl.nb_rcp_categories);
+ break;
+
+ case HW_QSL_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value);
+ break;
+
+ case HW_QSL_RCP_DISCARD:
+ GET_SET(be->qsl.v7.rcp[index].discard, value);
+ break;
+
+ case HW_QSL_RCP_DROP:
+ GET_SET(be->qsl.v7.rcp[index].drop, value);
+ break;
+
+ case HW_QSL_RCP_TBL_LO:
+ GET_SET(be->qsl.v7.rcp[index].tbl_lo, value);
+ break;
+
+ case HW_QSL_RCP_TBL_HI:
+ GET_SET(be->qsl.v7.rcp[index].tbl_hi, value);
+ break;
+
+ case HW_QSL_RCP_TBL_IDX:
+ GET_SET(be->qsl.v7.rcp[index].tbl_idx, value);
+ break;
+
+ case HW_QSL_RCP_TBL_MSK:
+ GET_SET(be->qsl.v7.rcp[index].tbl_msk, value);
+ break;
+
+ case HW_QSL_RCP_LR:
+ GET_SET(be->qsl.v7.rcp[index].lr, value);
+ break;
+
+ case HW_QSL_RCP_TSA:
+ GET_SET(be->qsl.v7.rcp[index].tsa, value);
+ break;
+
+ case HW_QSL_RCP_VLI:
+ GET_SET(be->qsl.v7.rcp[index].vli, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -117,6 +225,73 @@ int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qst_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qst_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_qst_entries) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.qst[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_qst_s));
+ break;
+
+ case HW_QSL_QST_QUEUE:
+ GET_SET(be->qsl.v7.qst[index].queue, value);
+ break;
+
+ case HW_QSL_QST_EN:
+ GET_SET(be->qsl.v7.qst[index].en, value);
+ break;
+
+ case HW_QSL_QST_TX_PORT:
+ GET_SET(be->qsl.v7.qst[index].tx_port, value);
+ break;
+
+ case HW_QSL_QST_LRE:
+ GET_SET(be->qsl.v7.qst[index].lre, value);
+ break;
+
+ case HW_QSL_QST_TCI:
+ GET_SET(be->qsl.v7.qst[index].tci, value);
+ break;
+
+ case HW_QSL_QST_VEN:
+ GET_SET(be->qsl.v7.qst[index].ven, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -130,6 +305,49 @@ int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qen_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qen_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= QSL_QEN_ENTRIES) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QEN_EN:
+ GET_SET(be->qsl.v7.qen[index].en, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, &value, 0);
+}
+
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, value, 1);
+}
+
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index b17bce3745..5572662647 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,12 +20,18 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_qsl {
+ struct hw_db_inline_qsl_data data;
+ int qst_idx;
+ } *qsl;
+
struct hw_db_inline_resource_db_slc_lr {
struct hw_db_inline_slc_lr_data data;
int ref;
} *slc_lr;
uint32_t nb_cot;
+ uint32_t nb_qsl;
uint32_t nb_slc_lr;
/* Items */
@@ -61,6 +67,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_qsl = ndev->be.qsl.nb_rcp_categories;
+ db->qsl = calloc(db->nb_qsl, sizeof(struct hw_db_inline_resource_db_qsl));
+
+ if (db->qsl == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_slc_lr = ndev->be.max_categories;
db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
@@ -86,6 +100,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->qsl);
free(db->slc_lr);
free(db->cat);
@@ -110,6 +125,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_QSL:
+ hw_db_inline_qsl_deref(ndev, db_handle, *(struct hw_db_qsl_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_SLC_LR:
hw_db_inline_slc_lr_deref(ndev, db_handle,
*(struct hw_db_slc_lr_idx *)&idxs[i]);
@@ -145,6 +164,13 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
+ /* QSL for traffic policing */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_hw_id, 0x3) < 0)
+ return -1;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, qsl_hw_id, 1) < 0)
+ return -1;
+
/* Select and enable QSL recipe */
if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
return -1;
@@ -255,6 +281,175 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* QSL */
+/******************************************************************************/
+
+/* Calculate queue mask for QSL TBL_MSK for given number of queues.
+ * NOTE: If number of queues is not power of two, then queue mask will be created
+ * for nearest smaller power of two.
+ */
+static uint32_t queue_mask(uint32_t nr_queues)
+{
+ nr_queues |= nr_queues >> 1;
+ nr_queues |= nr_queues >> 2;
+ nr_queues |= nr_queues >> 4;
+ nr_queues |= nr_queues >> 8;
+ nr_queues |= nr_queues >> 16;
+ return nr_queues >> 1;
+}
+
+static int hw_db_inline_qsl_compare(const struct hw_db_inline_qsl_data *data1,
+ const struct hw_db_inline_qsl_data *data2)
+{
+ if (data1->discard != data2->discard || data1->drop != data2->drop ||
+ data1->table_size != data2->table_size || data1->retransmit != data2->retransmit) {
+ return 0;
+ }
+
+ for (int i = 0; i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ if (data1->table[i].queue != data2->table[i].queue ||
+ data1->table[i].queue_en != data2->table[i].queue_en ||
+ data1->table[i].tx_port != data2->table[i].tx_port ||
+ data1->table[i].tx_port_en != data2->table[i].tx_port_en) {
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_qsl_idx qsl_idx = { .raw = 0 };
+ uint32_t qst_idx = 0;
+ int res;
+
+ qsl_idx.type = HW_DB_IDX_TYPE_QSL;
+
+ if (data->discard) {
+ qsl_idx.ids = 0;
+ return qsl_idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_qsl; ++i) {
+ if (hw_db_inline_qsl_compare(data, &db->qsl[i].data)) {
+ qsl_idx.ids = i;
+ hw_db_inline_qsl_ref(ndev, db, qsl_idx);
+ return qsl_idx;
+ }
+ }
+
+ res = flow_nic_alloc_resource(ndev, RES_QSL_RCP, 1);
+
+ if (res < 0) {
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qsl_idx.ids = res & 0xff;
+
+ if (data->table_size > 0) {
+ res = flow_nic_alloc_resource_config(ndev, RES_QSL_QST, data->table_size, 1);
+
+ if (res < 0) {
+ flow_nic_deref_resource(ndev, RES_QSL_RCP, qsl_idx.ids);
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qst_idx = (uint32_t)res;
+ }
+
+ memcpy(&db->qsl[qsl_idx.ids].data, data, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[qsl_idx.ids].qst_idx = qst_idx;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, qsl_idx.ids, 0x0);
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, qsl_idx.ids, data->discard);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_idx.ids, data->drop * 0x3);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_LR, qsl_idx.ids, data->retransmit * 0x3);
+
+ if (data->table_size == 0) {
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, 0x0);
+
+ } else {
+ const uint32_t table_start = qst_idx;
+ const uint32_t table_end = table_start + data->table_size - 1;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, table_end);
+
+ /* Toeplitz hash function uses TBL_IDX and TBL_MSK. */
+ uint32_t msk = queue_mask(table_end - table_start + 1);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, msk);
+
+ for (uint32_t i = 0; i < data->table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, table_start + i, 0x0);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_TX_PORT, table_start + i,
+ data->table[i].tx_port);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_LRE, table_start + i,
+ data->table[i].tx_port_en);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_QUEUE, table_start + i,
+ data->table[i].queue);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_EN, table_start + i,
+ data->table[i].queue_en);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, data->table_size);
+ }
+
+ hw_mod_qsl_rcp_flush(&ndev->be, qsl_idx.ids, 1);
+
+ return qsl_idx;
+}
+
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ (void)db_handle;
+
+ if (!idx.error && idx.ids != 0)
+ flow_nic_ref_resource(ndev, RES_QSL_RCP, idx.ids);
+}
+
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error || idx.ids == 0)
+ return;
+
+ if (flow_nic_deref_resource(ndev, RES_QSL_RCP, idx.ids) == 0) {
+ const int table_size = (int)db->qsl[idx.ids].data.table_size;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_qsl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ if (table_size > 0) {
+ const int table_start = db->qsl[idx.ids].qst_idx;
+
+ for (int i = 0; i < (int)table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL,
+ table_start + i, 0x0);
+ flow_nic_free_resource(ndev, RES_QSL_QST, table_start + i);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, table_size);
+ }
+
+ memset(&db->qsl[idx.ids].data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[idx.ids].qst_idx = 0;
+ }
+}
+
/******************************************************************************/
/* SLC_LR */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index ef63336b1c..d0435acaef 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,6 +36,10 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_qsl_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cat_idx {
HW_DB_IDX;
};
@@ -48,6 +52,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
};
@@ -113,6 +118,7 @@ struct hw_db_inline_action_set_data {
int jump;
struct {
struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
};
};
};
@@ -131,6 +137,11 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data);
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+
struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_slc_lr_data *data);
void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 405d87f632..999b1ed985 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2276,9 +2276,55 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
+
+static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_data *qsl_data,
+ uint32_t num_dest_port, uint32_t num_queues)
+{
+ memset(qsl_data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+
+ if (fd->dst_num_avail <= 0) {
+ qsl_data->drop = 1;
+
+ } else {
+ assert(fd->dst_num_avail < HW_DB_INLINE_MAX_QST_PER_QSL);
+
+ uint32_t ports[fd->dst_num_avail];
+ uint32_t queues[fd->dst_num_avail];
+
+ uint32_t port_index = 0;
+ uint32_t queue_index = 0;
+ uint32_t max = num_dest_port > num_queues ? num_dest_port : num_queues;
+
+		memset(ports, 0, sizeof(ports));
+		memset(queues, 0, sizeof(queues));
+
+ qsl_data->table_size = max;
+ qsl_data->retransmit = num_dest_port > 0 ? 1 : 0;
+
+ for (int i = 0; i < fd->dst_num_avail; ++i)
+ if (fd->dst_id[i].type == PORT_PHY)
+ ports[port_index++] = fd->dst_id[i].id;
+
+ else if (fd->dst_id[i].type == PORT_VIRT)
+ queues[queue_index++] = fd->dst_id[i].id;
+
+ for (uint32_t i = 0; i < max; ++i) {
+ if (num_dest_port > 0) {
+ qsl_data->table[i].tx_port = ports[i % num_dest_port];
+ qsl_data->table[i].tx_port_en = 1;
+ }
+
+ if (num_queues > 0) {
+ qsl_data->table[i].queue = queues[i % num_queues];
+ qsl_data->table[i].queue_en = 1;
+ }
+ }
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
- const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
uint32_t local_idxs[],
@@ -2288,6 +2334,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
+ local_idxs[(*local_idx_counter)++] = qsl_idx.raw;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2328,6 +2385,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
fh->caller_id = caller_id;
struct hw_db_inline_qsl_data qsl_data;
+ setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
@@ -2398,6 +2456,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle,
+ &qsl_data);
+ fh->db_idxs[fh->db_idx_counter++] = qsl_idx.raw;
+ action_set_data.qsl = qsl_idx;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2469,6 +2540,24 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* Initialize QSL with unmatched recipe index 0 - discard */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, 0, 0x1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, 0);
+
+ /* Initialize QST with default index 0 */
+ if (hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, 0, 0x0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_qst_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
+
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
@@ -2487,6 +2576,7 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
NT_FLM_VIOLATING_MBR_FLOW_TYPE,
@@ -2533,6 +2623,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, 0, 0);
+ hw_mod_qsl_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_QSL_RCP, 0);
+
hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
--
2.45.0
* [PATCH v4 30/86] net/ntnic: add KM module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 29/86] net/ntnic: add QSL module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 31/86] net/ntnic: add hash API Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Key Matcher module checks the values of individual packet fields.
It supports both exact matching, implemented with a CAM,
and wildcard matching, implemented with a TCAM.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 110 +-
drivers/net/ntnic/include/hw_mod_backend.h | 64 +-
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1065 +++++++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++
.../profile_inline/flow_api_hw_db_inline.h | 38 +
.../profile_inline/flow_api_profile_inline.c | 162 +++
7 files changed, 2024 insertions(+), 29 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b1d39b919b..a0f02f4e8a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -52,34 +52,32 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_WORD_NUM 24
+#define MAX_BANKS 6
+
+#define MAX_TCAM_START_OFFSETS 4
+
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
/*
- * Tunnel encapsulation header definition
+ * 128 128 32 32 32
+ * Have | QW0 || QW4 || SW8 || SW9 | SWX in FPGA
+ *
+ * Each word may start at any offset; the enabled words are
+ * combined in order to build the extracted match data, and the
+ * match key must be built in the same order
*/
-#define MAX_TUN_HDR_SIZE 128
-struct tunnel_header_s {
- union {
- uint8_t hdr8[MAX_TUN_HDR_SIZE];
- uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
- } d;
- uint32_t user_port_id;
- uint8_t len;
-
- uint8_t nb_vlans;
-
- uint8_t ip_version; /* 4: v4, 6: v6 */
- uint16_t ip_csum_precalc;
-
- uint8_t new_outer;
- uint8_t l2_len;
- uint8_t l3_len;
- uint8_t l4_len;
+enum extractor_e {
+ KM_USE_EXTRACTOR_UNDEF,
+ KM_USE_EXTRACTOR_QWORD,
+ KM_USE_EXTRACTOR_SWORD,
};
struct match_elem_s {
+ enum extractor_e extr;
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
uint32_t e_mask[4];
@@ -89,16 +87,76 @@ struct match_elem_s {
uint32_t word_len;
};
+enum cam_tech_use_e {
+ KM_CAM,
+ KM_TCAM,
+ KM_SYNERGY
+};
+
struct km_flow_def_s {
struct flow_api_backend_s *be;
+ /* For keeping track of identical entries */
+ struct km_flow_def_s *reference;
+ struct km_flow_def_s *root;
+
/* For collect flow elements and sorting */
struct match_elem_s match[MAX_MATCH_FIELDS];
+ struct match_elem_s *match_map[MAX_MATCH_FIELDS];
int num_ftype_elem;
+ /* Finally formatted CAM/TCAM entry */
+ enum cam_tech_use_e target;
+ uint32_t entry_word[MAX_WORD_NUM];
+ uint32_t entry_mask[MAX_WORD_NUM];
+ int key_word_size;
+
+ /* TCAM calculated possible bank start offsets */
+ int start_offsets[MAX_TCAM_START_OFFSETS];
+ int num_start_offsets;
+
/* Flow information */
/* HW input port ID needed for compare. In port must be identical on flow types */
uint32_t port_id;
+ uint32_t info; /* used for color (actions) */
+ int info_set;
+ int flow_type; /* 0 is illegal and used as unset */
+ int flushed_to_target; /* if this km entry has been finally programmed into NIC hw */
+
+ /* CAM specific bank management */
+ int cam_paired;
+ int record_indexes[MAX_BANKS];
+ int bank_used;
+ uint32_t *cuckoo_moves; /* for CAM statistics only */
+ struct cam_distrib_s *cam_dist;
+
+ /* TCAM specific bank management */
+ struct tcam_distrib_s *tcam_dist;
+ int tcam_start_bank;
+ int tcam_record;
+};
+
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
};
enum flow_port_type_e {
@@ -247,11 +305,25 @@ struct flow_handle {
};
};
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
void km_free_ndev_resource_management(void **handle);
int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
uint32_t word_len, enum frame_offs_e start, int8_t offset);
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id);
+/*
+ * Compares 2 KM key definitions after first collect validate and optimization.
+ * km is compared against an existing km1.
+ * if identical, km1 flow_type is returned
+ */
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1);
+
+int km_rcp_set(struct km_flow_def_s *km, int index);
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color);
+int km_clear_data_match_entry(struct km_flow_def_s *km);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6fa2a3d94f..26903f2183 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -132,6 +132,22 @@ static inline int is_non_zero(const void *addr, size_t n)
return 0;
}
+/* Sideband info bit indicator */
+#define SWX_INFO (1 << 6)
+
+enum km_flm_if_select_e {
+ KM_FLM_IF_FIRST = 0,
+ KM_FLM_IF_SECOND = 1
+};
+
+#define FIELD_START_INDEX 100
+
+#define COMMON_FUNC_INFO_S \
+ int ver; \
+ void *base; \
+ unsigned int alloced_size; \
+ int debug
+
enum frame_offs_e {
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
@@ -141,22 +157,39 @@ enum frame_offs_e {
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ SB_VNI = SWX_INFO | 1,
+ SB_MAC_PORT = SWX_INFO | 2,
+ SB_KCC_ID = SWX_INFO | 3
};
-/* Sideband info bit indicator */
+enum {
+ QW0_SEL_EXCLUDE = 0,
+ QW0_SEL_FIRST32 = 1,
+ QW0_SEL_FIRST64 = 3,
+ QW0_SEL_ALL128 = 4,
+};
-enum km_flm_if_select_e {
- KM_FLM_IF_FIRST = 0,
- KM_FLM_IF_SECOND = 1
+enum {
+ QW4_SEL_EXCLUDE = 0,
+ QW4_SEL_FIRST32 = 1,
+ QW4_SEL_FIRST64 = 2,
+ QW4_SEL_ALL128 = 3,
};
-#define FIELD_START_INDEX 100
+enum {
+ DW8_SEL_EXCLUDE = 0,
+ DW8_SEL_FIRST32 = 3,
+};
-#define COMMON_FUNC_INFO_S \
- int ver; \
- void *base; \
- unsigned int alloced_size; \
- int debug
+enum {
+ DW10_SEL_EXCLUDE = 0,
+ DW10_SEL_FIRST32 = 2,
+};
+
+enum {
+ SWX_SEL_EXCLUDE = 0,
+ SWX_SEL_ALL32 = 1,
+};
enum {
PROT_OTHER = 0,
@@ -440,13 +473,24 @@ int hw_mod_km_alloc(struct flow_api_backend_s *be);
void hw_mod_km_free(struct flow_api_backend_s *be);
int hw_mod_km_reset(struct flow_api_backend_s *be);
int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value);
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value);
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count);
int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
int byte_val, uint32_t *value_set);
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set);
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 237e9f7b4e..30d6ea728e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -10,8 +10,34 @@
#include "flow_api_engine.h"
#include "nt_util.h"
+#define MAX_QWORDS 2
+#define MAX_SWORDS 2
+
+#define CUCKOO_MOVE_MAX_DEPTH 8
+
#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+#define CAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_cam_records + (rec))
+#define CAM_KM_DIST_IDX(bnk) \
+ ({ \
+ int _temp_bnk = (bnk); \
+ CAM_DIST_IDX(_temp_bnk, km->record_indexes[_temp_bnk]); \
+ })
+
+#define TCAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_tcam_bank_width + (rec))
+
+#define CAM_ENTRIES \
+ (km->be->km.nb_cam_banks * km->be->km.nb_cam_records * sizeof(struct cam_distrib_s))
+#define TCAM_ENTRIES \
+ (km->be->km.nb_tcam_bank_width * km->be->km.nb_tcam_banks * sizeof(struct tcam_distrib_s))
+
+/*
+ * CAM structures and defines
+ */
+struct cam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
static const struct cam_match_masks_s {
uint32_t word_len;
uint32_t key_mask[4];
@@ -36,6 +62,25 @@ static const struct cam_match_masks_s {
{ 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
};
+static int cam_addr_reserved_stack[CUCKOO_MOVE_MAX_DEPTH];
+
+/*
+ * TCAM structures and defines
+ */
+struct tcam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
+static int tcam_find_mapping(struct km_flow_def_s *km);
+
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
+{
+ km->cam_dist = (struct cam_distrib_s *)*handle;
+ km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
+ km->tcam_dist =
+ (struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+}
+
void km_free_ndev_resource_management(void **handle)
{
if (*handle) {
@@ -98,3 +143,1023 @@ int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_m
km->num_ftype_elem++;
return 0;
}
+
+static int get_word(struct km_flow_def_s *km, uint32_t size, int marked[])
+{
+ for (int i = 0; i < km->num_ftype_elem; i++)
+ if (!marked[i] && !(km->match[i].extr_start_offs_id & SWX_INFO) &&
+ km->match[i].word_len == size)
+ return i;
+
+ return -1;
+}
+
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id)
+{
+ /*
+ * Create combined extractor mappings
+ * if key fields may be changed to cover un-mappable otherwise?
+ * split into cam and tcam and use synergy mode when available
+ */
+ int match_marked[MAX_MATCH_FIELDS];
+ int idx = 0;
+ int next = 0;
+ int m_idx;
+ int size;
+
+ memset(match_marked, 0, sizeof(match_marked));
+
+ /* build QWords */
+ for (int qwords = 0; qwords < MAX_QWORDS; qwords++) {
+ size = 4;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 2;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 1;
+ m_idx = get_word(km, 1, match_marked);
+ }
+ }
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_QWORD;
+
+ /* build final entry words and mask array */
+ for (int i = 0; i < size; i++) {
+ km->entry_word[idx + i] = km->match[m_idx].e_word[i];
+ km->entry_mask[idx + i] = km->match[m_idx].e_mask[i];
+ }
+
+ idx += size;
+ next++;
+ }
+
+ m_idx = get_word(km, 4, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more QWords */
+ return -1;
+ }
+
+ /*
+ * On KM v6+ we have DWORDs here instead; however, we only use them as SWORDs for
+ * now. No match could exploit these as DWORDs because of the maximum length of 12
+ * words in the CAM: the last 2 words are taken by KCC-ID/SWX and Color. With one
+ * or no QWORD, both DWORDs would fit within 10 words, but we don't have such a
+ * use case built in yet
+ */
+ /* build SWords */
+ for (int swords = 0; swords < MAX_SWORDS; swords++) {
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_SWORD;
+
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[m_idx].e_word[0];
+ km->entry_mask[idx] = km->match[m_idx].e_mask[0];
+ idx++;
+ next++;
+ }
+
+ /*
+ * Make sure we took them all
+ */
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more SWords */
+ return -1;
+ }
+
+ /*
+ * Handle SWX words specially
+ */
+ int swx_found = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id & SWX_INFO) {
+ km->match_map[next] = &km->match[i];
+ km->match[i].extr = KM_USE_EXTRACTOR_SWORD;
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[i].e_word[0];
+ km->entry_mask[idx] = km->match[i].e_mask[0];
+ idx++;
+ next++;
+ swx_found = 1;
+ }
+ }
+
+ assert(next == km->num_ftype_elem);
+
+ km->key_word_size = idx;
+ km->port_id = port_id;
+
+ km->target = KM_CAM;
+
+ /*
+ * Finally, decide whether this match->action goes into the TCAM.
+ * When an SWX word is used, it must always go into the CAM, regardless of the
+ * mask pattern. Later, when synergy mode is applied, a split is possible.
+ */
+ if (!swx_found && km->key_word_size <= 6) {
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match_map[i]->masked_for_tcam) {
+ /* At least one */
+ km->target = KM_TCAM;
+ }
+ }
+ }
+
+ NT_LOG(DBG, FILTER, "This flow goes into %s", (km->target == KM_TCAM) ? "TCAM" : "CAM");
+
+ if (km->target == KM_TCAM) {
+ if (km->key_word_size > 10) {
+ /* do not support SWX in TCAM */
+ return -1;
+ }
+
+ /*
+ * adjust for unsupported key word size in TCAM
+ */
+ if ((km->key_word_size == 5 || km->key_word_size == 7 || km->key_word_size == 9)) {
+ km->entry_mask[km->key_word_size] = 0;
+ km->key_word_size++;
+ }
+
+ /*
+ * Calculate the possible start indexes, given that the length of
+ * a key cannot change among the banks it uses.
+ * Unfortunately, restrictions in the TCAM lookup
+ * make it hard to handle key lengths larger than 6,
+ * although other sizes should be possible too
+ */
+ switch (km->key_word_size) {
+ case 1:
+ for (int i = 0; i < 4; i++)
+ km->start_offsets[i] = 8 + i;
+
+ km->num_start_offsets = 4;
+ break;
+
+ case 2:
+ km->start_offsets[0] = 6;
+ km->num_start_offsets = 1;
+ break;
+
+ case 3:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 4:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 6:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Final Key word size too large: %i",
+ km->key_word_size);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1)
+{
+ if (km->target != km1->target || km->num_ftype_elem != km1->num_ftype_elem ||
+ km->key_word_size != km1->key_word_size || km->info_set != km1->info_set)
+ return 0;
+
+ /*
+ * before KCC-CAM:
+ * if port is added to match, then we can have different ports in CAT
+ * that reuses this flow type
+ */
+ int port_match_included = 0, kcc_swx_used = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id == SB_MAC_PORT) {
+ port_match_included = 1;
+ break;
+ }
+
+ if (km->match_map[i]->extr_start_offs_id == SB_KCC_ID) {
+ kcc_swx_used = 1;
+ break;
+ }
+ }
+
+ /*
+ * If not using KCC and if port match is not included in CAM,
+ * we need to have same port_id to reuse
+ */
+ if (!kcc_swx_used && !port_match_included && km->port_id != km1->port_id)
+ return 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ /* using same extractor types in same sequence */
+ if (km->match_map[i]->extr_start_offs_id !=
+ km1->match_map[i]->extr_start_offs_id ||
+ km->match_map[i]->rel_offs != km1->match_map[i]->rel_offs ||
+ km->match_map[i]->extr != km1->match_map[i]->extr ||
+ km->match_map[i]->word_len != km1->match_map[i]->word_len) {
+ return 0;
+ }
+ }
+
+ if (km->target == KM_CAM) {
+ /* in CAM must exactly match on all masks */
+ for (int i = 0; i < km->key_word_size; i++)
+ if (km->entry_mask[i] != km1->entry_mask[i])
+ return 0;
+
+ /* Would be set later if not reusing from km1 */
+ km->cam_paired = km1->cam_paired;
+
+ } else if (km->target == KM_TCAM) {
+ /*
+ * If TCAM, we must make sure Recipe Key Mask does not
+ * mask out enable bits in masks
+ * Note: it is important that km1 is the original creator
+ * of the KM Recipe, since it contains its true masks
+ */
+ for (int i = 0; i < km->key_word_size; i++)
+ if ((km->entry_mask[i] & km1->entry_mask[i]) != km->entry_mask[i])
+ return 0;
+
+ km->tcam_start_bank = km1->tcam_start_bank;
+ km->tcam_record = -1; /* needs to be found later */
+
+ } else {
+ NT_LOG(DBG, FILTER, "ERROR - KM target not defined or supported");
+ return 0;
+ }
+
+ /*
+ * Check for a flow clash. If already programmed return with -1
+ */
+ int double_match = 1;
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ if ((km->entry_word[i] & km->entry_mask[i]) !=
+ (km1->entry_word[i] & km1->entry_mask[i])) {
+ double_match = 0;
+ break;
+ }
+ }
+
+ if (double_match)
+ return -1;
+
+ /*
+ * Note that TCAM and CAM may reuse same RCP and flow type
+ * when this happens, CAM entry wins on overlap
+ */
+
+ /* Use same KM Recipe and same flow type - return flow type */
+ return km1->flow_type;
+}
+
+int km_rcp_set(struct km_flow_def_s *km, int index)
+{
+ int qw = 0;
+ int sw = 0;
+ int swx = 0;
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PRESET_ALL, index, 0, 0);
+
+ /* set extractor words, offs, contrib */
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ switch (km->match_map[i]->extr) {
+ case KM_USE_EXTRACTOR_SWORD:
+ if (km->match_map[i]->extr_start_offs_id & SWX_INFO) {
+ if (km->target == KM_CAM && swx == 0) {
+ /* SWX */
+ if (km->match_map[i]->extr_start_offs_id == SB_VNI) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - VNI");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_MAC_PORT) {
+ NT_LOG(DBG, FILTER,
+ "Set KM SWX sel A - PTC + MAC");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_KCC_ID) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - KCC ID");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ return -1;
+ }
+
+ swx++;
+
+ } else {
+ if (sw == 0) {
+ /* DW8 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_SEL_A, index, 0,
+ DW8_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW8 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else if (sw == 1) {
+ /* DW10 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_SEL_A, index, 0,
+ DW10_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW10 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else {
+ return -1;
+ }
+
+ sw++;
+ }
+
+ break;
+
+ case KM_USE_EXTRACTOR_QWORD:
+ if (qw == 0) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW0 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else if (qw == 1) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW4 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else {
+ return -1;
+ }
+
+ qw++;
+ break;
+
+ default:
+ return -1;
+ }
+ }
+
+ /* set mask A */
+ for (int i = 0; i < km->key_word_size; i++) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_MASK_A, index,
+ (km->be->km.nb_km_rcp_mask_a_word_size - 1) - i,
+ km->entry_mask[i]);
+ NT_LOG(DBG, FILTER, "Set KM mask A: %08x", km->entry_mask[i]);
+ }
+
+ if (km->target == KM_CAM) {
+ /* set info - Color */
+ if (km->info_set) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_INFO_A, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM info A");
+ }
+
+ /* set key length A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_EL_A, index, 0,
+ km->key_word_size + !!km->info_set - 1); /* select id is -1 */
+ /* set Flow Type for Key A */
+ NT_LOG(DBG, FILTER, "Set KM EL A: %i", km->key_word_size + !!km->info_set - 1);
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_FTM_A, index, 0, 1 << km->flow_type);
+
+ NT_LOG(DBG, FILTER, "Set KM FTM A - ft: %i", km->flow_type);
+
+		/* Set Paired - only on the CAM part for now. TODO: split CAM and TCAM */
+ if ((uint32_t)(km->key_word_size + !!km->info_set) >
+ km->be->km.nb_cam_record_words) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PAIRED, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM CAM Paired");
+ km->cam_paired = 1;
+ }
+
+ } else if (km->target == KM_TCAM) {
+ uint32_t bank_bm = 0;
+
+ if (tcam_find_mapping(km) < 0) {
+ /* failed mapping into TCAM */
+ NT_LOG(DBG, FILTER, "INFO: TCAM mapping flow failed");
+ return -1;
+ }
+
+ assert((uint32_t)(km->tcam_start_bank + km->key_word_size) <=
+ km->be->km.nb_tcam_banks);
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ bank_bm |=
+ (1 << (km->be->km.nb_tcam_banks - 1 - (km->tcam_start_bank + i)));
+ }
+
+ /* Set BANK_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_BANK_A, index, 0, bank_bm);
+ /* Set Kl_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_KL_A, index, 0, km->key_word_size - 1);
+
+ } else {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int cam_populate(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ km->entry_word[i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = km;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1,
+ km->entry_word[km->be->km.nb_cam_record_words + i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = km;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+
+ return res;
+}
+
+static int cam_reset_entry(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = NULL;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = NULL;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+ return res;
+}
+
+static int move_cuckoo_index(struct km_flow_def_s *km)
+{
+ assert(km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner);
+
+ for (uint32_t bank = 0; bank < km->be->km.nb_cam_banks; bank++) {
+ /* It will not select itself */
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner == NULL) {
+ if (km->cam_paired) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner != NULL)
+ continue;
+ }
+
+ /*
+ * Populate in new position
+ */
+ int res = cam_populate(km, bank);
+
+ if (res) {
+ NT_LOG(DBG, FILTER,
+ "Error: failed to write to KM CAM in cuckoo move");
+ return 0;
+ }
+
+			/*
+			 * Reset/free the entry in the old bank.
+			 * HW flushes are not really needed here; the old addresses
+			 * are always taken over by the caller. If you change this
+			 * code in future updates, that may no longer be true!
+			 */
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = NULL;
+
+ if (km->cam_paired)
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "KM Cuckoo hash moved from bank %i to bank %i (%04X => %04X)",
+ km->bank_used, bank, CAM_KM_DIST_IDX(km->bank_used),
+ CAM_KM_DIST_IDX(bank));
+ km->bank_used = bank;
+ (*km->cuckoo_moves)++;
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx, int levels,
+ int cam_adr_list_len)
+{
+ struct km_flow_def_s *km = km_parent->cam_dist[bank_idx].km_owner;
+
+ assert(levels <= CUCKOO_MOVE_MAX_DEPTH);
+
+	/*
+	 * Only move entries with the same pairing (paired vs. single).
+	 * This can be extended later to handle moves of both paired and single entries.
+	 */
+ if (!km || km_parent->cam_paired != km->cam_paired)
+ return 0;
+
+ if (move_cuckoo_index(km))
+ return 1;
+
+ if (levels <= 1)
+ return 0;
+
+ assert(cam_adr_list_len < CUCKOO_MOVE_MAX_DEPTH);
+
+ cam_addr_reserved_stack[cam_adr_list_len++] = bank_idx;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ int reserved = 0;
+ int new_idx = CAM_KM_DIST_IDX(i);
+
+ for (int i_reserved = 0; i_reserved < cam_adr_list_len; i_reserved++) {
+ if (cam_addr_reserved_stack[i_reserved] == new_idx) {
+ reserved = 1;
+ break;
+ }
+ }
+
+ if (reserved)
+ continue;
+
+ int res = move_cuckoo_index_level(km, new_idx, levels - 1, cam_adr_list_len);
+
+ if (res) {
+ if (move_cuckoo_index(km))
+ return 1;
+
+ assert(0);
+ }
+ }
+
+ return 0;
+}
+
+static int km_write_data_to_cam(struct km_flow_def_s *km)
+{
+ int res = 0;
+ assert(km->be->km.nb_cam_banks <= MAX_BANKS);
+ assert(km->cam_dist);
+
+ NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
+ km->record_indexes[1], km->record_indexes[2]);
+
+ if (km->info_set)
+ km->entry_word[km->key_word_size] = km->info; /* finally set info */
+
+ int bank = -1;
+
+ /*
+ * first step, see if any of the banks are free
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(i_bank)].km_owner == NULL) {
+ if (km->cam_paired == 0 ||
+ km->cam_dist[CAM_KM_DIST_IDX(i_bank) + 1].km_owner == NULL) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0) {
+ /*
+ * Second step - cuckoo move existing flows if possible
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (move_cuckoo_index_level(km, CAM_KM_DIST_IDX(i_bank), 4, 0)) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0)
+ return -1;
+
+ /* populate CAM */
+ NT_LOG(DBG, FILTER, "KM Bank = %i (addr %04X)", bank, CAM_KM_DIST_IDX(bank));
+ res = cam_populate(km, bank);
+
+ if (res == 0) {
+ km->flushed_to_target = 1;
+ km->bank_used = bank;
+ }
+
+ return res;
+}
+
+/*
+ * TCAM
+ */
+static int tcam_find_free_record(struct km_flow_def_s *km, int start_bank)
+{
+ for (uint32_t rec = 0; rec < km->be->km.nb_tcam_bank_width; rec++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank, rec)].km_owner == NULL) {
+ int pass = 1;
+
+ for (int ii = 1; ii < km->key_word_size; ii++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank + ii, rec)].km_owner !=
+ NULL) {
+ pass = 0;
+ break;
+ }
+ }
+
+ if (pass) {
+ km->tcam_record = rec;
+ return 1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int tcam_find_mapping(struct km_flow_def_s *km)
+{
+ /* Search record and start index for this flow */
+ for (int bs_idx = 0; bs_idx < km->num_start_offsets; bs_idx++) {
+ if (tcam_find_free_record(km, km->start_offsets[bs_idx])) {
+ km->tcam_start_bank = km->start_offsets[bs_idx];
+ NT_LOG(DBG, FILTER, "Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ return 0;
+ }
+ }
+
+ return -1;
+}
+
+static int tcam_write_word(struct km_flow_def_s *km, int bank, int record, uint32_t word,
+ uint32_t mask)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ uint8_t a = (uint8_t)((word >> (24 - (byte * 8))) & 0xff);
+ uint8_t a_m = (uint8_t)((mask >> (24 - (byte * 8))) & 0xff);
+		/* keep only the value bits covered by the mask */
+ a = a & a_m;
+
+ for (int val = 0; val < 256; val++) {
+ err |= hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if ((val & a_m) == a)
+ all_recs[rec_val] |= rec_bit;
+ else
+ all_recs[rec_val] &= ~rec_bit;
+
+ err |= hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ /* flush bank */
+ err |= hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+
+ if (err == 0) {
+ assert(km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner == NULL);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = km;
+ }
+
+ return err;
+}
+
+static int km_write_data_to_tcam(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_record < 0) {
+ tcam_find_free_record(km, km->tcam_start_bank);
+
+ if (km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER, "Reused RCP: Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ }
+
+ /* Write KM_TCI */
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record,
+ km->info);
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record,
+ km->flow_type);
+ err |= hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++) {
+ err = tcam_write_word(km, km->tcam_start_bank + i, km->tcam_record,
+ km->entry_word[i], km->entry_mask[i]);
+ }
+
+ if (err == 0)
+ km->flushed_to_target = 1;
+
+ return err;
+}
+
+static int tcam_reset_bank(struct km_flow_def_s *km, int bank, int record)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ for (int val = 0; val < 256; val++) {
+ err = hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+
+ all_recs[rec_val] &= ~rec_bit;
+ err = hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ if (err)
+ return err;
+
+ /* flush bank */
+ err = hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER, "Reset TCAM bank %i, rec_val %i rec bit %08x", bank, rec_val,
+ rec_bit);
+
+ return err;
+}
+
+static int tcam_reset_entry(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_start_bank < 0 || km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ /* Write KM_TCI */
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++)
+ err = tcam_reset_bank(km, km->tcam_start_bank + i, km->tcam_record);
+
+ return err;
+}
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color)
+{
+ int res = -1;
+
+ km->info = color;
+ NT_LOG(DBG, FILTER, "Write Data entry Color: %08x", color);
+
+ switch (km->target) {
+ case KM_CAM:
+ res = km_write_data_to_cam(km);
+ break;
+
+ case KM_TCAM:
+ res = km_write_data_to_tcam(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ break;
+ }
+
+ return res;
+}
+
+int km_clear_data_match_entry(struct km_flow_def_s *km)
+{
+ int res = 0;
+
+ if (km->root) {
+ struct km_flow_def_s *km1 = km->root;
+
+ while (km1->reference != km)
+ km1 = km1->reference;
+
+ km1->reference = km->reference;
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->reference) {
+ km->reference->root = NULL;
+
+ switch (km->target) {
+ case KM_CAM:
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = km->reference;
+
+ if (km->key_word_size + !!km->info_set > 1) {
+ assert(km->cam_paired);
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner =
+ km->reference;
+ }
+
+ break;
+
+ case KM_TCAM:
+ for (int i = 0; i < km->key_word_size; i++) {
+ km->tcam_dist[TCAM_DIST_IDX(km->tcam_start_bank + i,
+ km->tcam_record)]
+ .km_owner = km->reference;
+ }
+
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->flushed_to_target) {
+ switch (km->target) {
+ case KM_CAM:
+ res = cam_reset_entry(km, km->bank_used);
+ break;
+
+ case KM_TCAM:
+ res = tcam_reset_entry(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+ }
+
+ return res;
+}
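The CAM insertion path above (`km_write_data_to_cam`) first looks for a free bank and, failing that, tries to cuckoo-move an existing entry into one of its alternative banks (`move_cuckoo_index`, applied recursively up to four levels). A minimal, self-contained sketch of the single-level idea — the bank/record layout and hash choices here are illustrative, not the driver's actual structures:

```c
#include <assert.h>
#include <stddef.h>

#define NB_BANKS 3
#define NB_RECORDS 4

struct flow {
	int rec[NB_BANKS];	/* candidate record index in each bank (the hashes) */
	int bank_used;		/* bank currently occupied, -1 if none */
};

static struct flow *cam[NB_BANKS][NB_RECORDS];	/* NULL = free slot */

/* Relocate 'f' to another bank whose candidate slot is free; this mirrors
 * move_cuckoo_index() with the paired-entry handling stripped out.
 */
static int move_cuckoo(struct flow *f)
{
	for (int b = 0; b < NB_BANKS; b++) {
		if (b == f->bank_used || cam[b][f->rec[b]] != NULL)
			continue;
		cam[b][f->rec[b]] = f;				/* populate new position */
		cam[f->bank_used][f->rec[f->bank_used]] = NULL;	/* free old one */
		f->bank_used = b;
		return 1;
	}
	return 0;
}

/* Insert: use a free candidate slot if any, otherwise evict the occupant
 * of one of our candidate slots by cuckoo-moving it (one level deep).
 */
static int insert_flow(struct flow *f)
{
	for (int b = 0; b < NB_BANKS; b++) {
		if (cam[b][f->rec[b]] == NULL) {
			cam[b][f->rec[b]] = f;
			f->bank_used = b;
			return 0;
		}
	}
	for (int b = 0; b < NB_BANKS; b++) {
		if (move_cuckoo(cam[b][f->rec[b]])) {
			cam[b][f->rec[b]] = f;
			f->bank_used = b;
			return 0;
		}
	}
	return -1;	/* no room, even after moving */
}
```

With every candidate slot taken, the insert still succeeds whenever one occupant has a free alternative slot — exactly the situation `km_write_data_to_cam` resolves before giving up.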
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
index 532884ca01..b8a30671c3 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
@@ -165,6 +165,240 @@ int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
return be->iface->km_rcp_flush(be->be_dev, &be->km, start_idx, count);
}
+static int hw_mod_km_rcp_mod(struct flow_api_backend_s *be, enum hw_km_e field, int index,
+ int word_off, uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->km.nb_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.rcp[index], (uint8_t)*value, sizeof(struct km_v7_rcp_s));
+ break;
+
+ case HW_KM_RCP_QW0_DYN:
+ GET_SET(be->km.v7.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_b, value);
+ break;
+
+ case HW_KM_RCP_QW4_DYN:
+ GET_SET(be->km.v7.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW8_DYN:
+ GET_SET(be->km.v7.rcp[index].dw8_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW8_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw8_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW10_DYN:
+ GET_SET(be->km.v7.rcp[index].dw10_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW10_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw10_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_b, value);
+ break;
+
+ case HW_KM_RCP_SWX_CCH:
+ GET_SET(be->km.v7.rcp[index].swx_cch, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_A:
+ GET_SET(be->km.v7.rcp[index].swx_sel_a, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_B:
+ GET_SET(be->km.v7.rcp[index].swx_sel_b, value);
+ break;
+
+ case HW_KM_RCP_MASK_A:
+ if (word_off > KM_RCP_MASK_D_A_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_d_a[word_off], value);
+ break;
+
+ case HW_KM_RCP_MASK_B:
+ if (word_off > KM_RCP_MASK_B_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_b[word_off], value);
+ break;
+
+ case HW_KM_RCP_DUAL:
+ GET_SET(be->km.v7.rcp[index].dual, value);
+ break;
+
+ case HW_KM_RCP_PAIRED:
+ GET_SET(be->km.v7.rcp[index].paired, value);
+ break;
+
+ case HW_KM_RCP_EL_A:
+ GET_SET(be->km.v7.rcp[index].el_a, value);
+ break;
+
+ case HW_KM_RCP_EL_B:
+ GET_SET(be->km.v7.rcp[index].el_b, value);
+ break;
+
+ case HW_KM_RCP_INFO_A:
+ GET_SET(be->km.v7.rcp[index].info_a, value);
+ break;
+
+ case HW_KM_RCP_INFO_B:
+ GET_SET(be->km.v7.rcp[index].info_b, value);
+ break;
+
+ case HW_KM_RCP_FTM_A:
+ GET_SET(be->km.v7.rcp[index].ftm_a, value);
+ break;
+
+ case HW_KM_RCP_FTM_B:
+ GET_SET(be->km.v7.rcp[index].ftm_b, value);
+ break;
+
+ case HW_KM_RCP_BANK_A:
+ GET_SET(be->km.v7.rcp[index].bank_a, value);
+ break;
+
+ case HW_KM_RCP_BANK_B:
+ GET_SET(be->km.v7.rcp[index].bank_b, value);
+ break;
+
+ case HW_KM_RCP_KL_A:
+ GET_SET(be->km.v7.rcp[index].kl_a, value);
+ break;
+
+ case HW_KM_RCP_KL_B:
+ GET_SET(be->km.v7.rcp[index].kl_b, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_A:
+ GET_SET(be->km.v7.rcp[index].keyway_a, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_B:
+ GET_SET(be->km.v7.rcp[index].keyway_b, value);
+ break;
+
+ case HW_KM_RCP_SYNERGY_MODE:
+ GET_SET(be->km.v7.rcp[index].synergy_mode, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw0_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw0_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw2_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw2_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw4_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw4_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw5_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw5_b_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, &value, 0);
+}
+
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, value, 1);
+}
+
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -180,6 +414,103 @@ int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_cam_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_cam_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ if ((unsigned int)bank >= be->km.nb_cam_banks) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ if ((unsigned int)record >= be->km.nb_cam_records) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ unsigned int index = bank * be->km.nb_cam_records + record;
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_CAM_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.cam[index], (uint8_t)*value, sizeof(struct km_v7_cam_s));
+ break;
+
+ case HW_KM_CAM_W0:
+ GET_SET(be->km.v7.cam[index].w0, value);
+ break;
+
+ case HW_KM_CAM_W1:
+ GET_SET(be->km.v7.cam[index].w1, value);
+ break;
+
+ case HW_KM_CAM_W2:
+ GET_SET(be->km.v7.cam[index].w2, value);
+ break;
+
+ case HW_KM_CAM_W3:
+ GET_SET(be->km.v7.cam[index].w3, value);
+ break;
+
+ case HW_KM_CAM_W4:
+ GET_SET(be->km.v7.cam[index].w4, value);
+ break;
+
+ case HW_KM_CAM_W5:
+ GET_SET(be->km.v7.cam[index].w5, value);
+ break;
+
+ case HW_KM_CAM_FT0:
+ GET_SET(be->km.v7.cam[index].ft0, value);
+ break;
+
+ case HW_KM_CAM_FT1:
+ GET_SET(be->km.v7.cam[index].ft1, value);
+ break;
+
+ case HW_KM_CAM_FT2:
+ GET_SET(be->km.v7.cam[index].ft2, value);
+ break;
+
+ case HW_KM_CAM_FT3:
+ GET_SET(be->km.v7.cam[index].ft3, value);
+ break;
+
+ case HW_KM_CAM_FT4:
+ GET_SET(be->km.v7.cam[index].ft4, value);
+ break;
+
+ case HW_KM_CAM_FT5:
+ GET_SET(be->km.v7.cam[index].ft5, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_cam_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count)
{
if (count == ALL_ENTRIES)
@@ -273,6 +604,12 @@ int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int ba
return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 0);
}
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set)
+{
+ return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 1);
+}
+
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -288,6 +625,49 @@ int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_tci_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_tci_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ unsigned int index = bank * be->km.nb_tcam_bank_width + record;
+
+ if (index >= (be->km.nb_tcam_banks * be->km.nb_tcam_bank_width)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_TCI_COLOR:
+ GET_SET(be->km.v7.tci[index].color, value);
+ break;
+
+ case HW_KM_TCI_FT:
+ GET_SET(be->km.v7.tci[index].ft, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_tci_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
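The `hw_mod_km_*_mod` helpers above route both directions through one function: `hw_mod_km_rcp_set` passes `get = 0`, `hw_mod_km_rcp_get` passes `get = 1`, and each field case delegates to a `GET_SET` macro. The macro itself is defined elsewhere in the backend headers; a plausible minimal sketch of the idiom (struct and field names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Copy field -> *vptr when reading, *vptr -> field when writing.
 * A 'get' flag is expected to be in scope, as in hw_mod_km_rcp_mod().
 */
#define GET_SET(field, vptr)				\
	do {						\
		if (get)				\
			*(vptr) = (uint32_t)(field);	\
		else					\
			(field) = *(vptr);		\
	} while (0)

struct rcp_s {
	uint32_t qw0_dyn;	/* one sample field */
};

static int rcp_mod(struct rcp_s *rcp, uint32_t *value, int get)
{
	GET_SET(rcp->qw0_dyn, value);
	return 0;
}
```

The public set/get wrappers then become one-liners, as in the patch: set passes `&value` with `get = 0`, get passes the caller's pointer with `get = 1`.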
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5572662647..4737460cdf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -40,7 +40,19 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_km_rcp {
+ struct hw_db_inline_km_rcp_data data;
+ int ref;
+
+ struct hw_db_inline_resource_db_km_ft {
+ struct hw_db_inline_km_ft_data data;
+ int ref;
+ } *ft;
+ } *km;
+
uint32_t nb_cat;
+ uint32_t nb_km_ft;
+ uint32_t nb_km_rcp;
/* Hardware */
@@ -91,6 +103,25 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_km_ft = ndev->be.cat.nb_flow_types;
+ db->nb_km_rcp = ndev->be.km.nb_categories;
+ db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
+
+ if (db->km == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ db->km[i].ft = calloc(db->nb_km_ft * db->nb_cat,
+ sizeof(struct hw_db_inline_resource_db_km_ft));
+
+ if (db->km[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
*db_handle = db;
return 0;
}
@@ -104,6 +135,13 @@ void hw_db_inline_destroy(void *db_handle)
free(db->slc_lr);
free(db->cat);
+ if (db->km) {
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
+ free(db->km[i].ft);
+
+ free(db->km);
+ }
+
free(db->cfn);
free(db);
@@ -134,12 +172,61 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_KM_RCP:
+ hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
+ break;
+
default:
break;
}
}
}
+
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ for (uint32_t i = 0; i < size; ++i) {
+ if (idxs[i].type != type)
+ continue;
+
+ switch (type) {
+ case HW_DB_IDX_TYPE_NONE:
+ return NULL;
+
+ case HW_DB_IDX_TYPE_CAT:
+ return &db->cat[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_QSL:
+ return &db->qsl[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_COT:
+ return &db->cot[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_SLC_LR:
+ return &db->slc_lr[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_KM_RCP:
+ return &db->km[idxs[i].id1].data;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
+ default:
+ return NULL;
+ }
+ }
+
+ return NULL;
+}
+
/******************************************************************************/
/* Filter */
/******************************************************************************/
@@ -614,3 +701,150 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->cat[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* KM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_km_compare(const struct hw_db_inline_km_rcp_data *data1,
+ const struct hw_db_inline_km_rcp_data *data2)
+{
+ return data1->rcp == data2->rcp;
+}
+
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_km_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_RCP;
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ if (!found && db->km[i].ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (db->km[i].ref > 0 && hw_db_inline_km_compare(data, &db->km[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->km[idx.id1].data, data, sizeof(struct hw_db_inline_km_rcp_data));
+ db->km[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->km[idx.id1].ref += 1;
+}
+
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ (void)db_handle;
+
+ if (idx.error)
+ return;
+}
+
+/******************************************************************************/
+/* KM FT */
+/******************************************************************************/
+
+static int hw_db_inline_km_ft_compare(const struct hw_db_inline_km_ft_data *data1,
+ const struct hw_db_inline_km_ft_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[data->km.id1];
+ struct hw_db_km_ft idx = { .raw = 0 };
+ uint32_t cat_offset = data->cat.ids * db->nb_cat;
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_FT;
+ idx.id2 = data->km.id1;
+ idx.id3 = data->cat.ids;
+
+ if (km_rcp->data.rcp == 0) {
+ idx.id1 = 0;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_km_ft; ++i) {
+ const struct hw_db_inline_resource_db_km_ft *km_ft = &km_rcp->ft[cat_offset + i];
+
+ if (!found && km_ft->ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (km_ft->ref > 0 && hw_db_inline_km_ft_compare(data, &km_ft->data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&km_rcp->ft[cat_offset + idx.id1].data, data,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error) {
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+ db->km[idx.id2].ft[cat_offset + idx.id1].ref += 1;
+ }
+}
+
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[idx.id2];
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+
+ if (idx.error)
+ return;
+
+ km_rcp->ft[cat_offset + idx.id1].ref -= 1;
+
+ if (km_rcp->ft[cat_offset + idx.id1].ref <= 0) {
+ memset(&km_rcp->ft[cat_offset + idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 0;
+ }
+}
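`hw_db_inline_km_add` and `hw_db_inline_km_ft_add` above follow one pattern: scan the table, remember the first free slot, re-reference an existing identical entry if one is found, otherwise claim the free slot with a reference count of 1; deref clears the slot once the count drops to zero. A stripped-down sketch of that reference-counted, deduplicating table (slot layout illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NB_SLOTS 4

struct slot {
	uint32_t data;	/* stands in for hw_db_inline_km_rcp_data etc. */
	int ref;
};

static struct slot db[NB_SLOTS];

/* Returns the slot index holding 'data', or -1 when the table is full. */
static int db_add(uint32_t data)
{
	int free_idx = -1;

	for (int i = 0; i < NB_SLOTS; i++) {
		if (db[i].ref > 0 && db[i].data == data) {
			db[i].ref += 1;	/* identical entry: just re-reference */
			return i;
		}
		if (free_idx < 0 && db[i].ref <= 0)
			free_idx = i;	/* remember first free slot */
	}

	if (free_idx < 0)
		return -1;	/* resource exhaustion */

	db[free_idx].data = data;
	db[free_idx].ref = 1;
	return free_idx;
}

static void db_deref(int idx)
{
	if (--db[idx].ref <= 0)
		memset(&db[idx], 0x0, sizeof(db[idx]));	/* free the slot */
}
```

Deduplication is what lets many flows that resolve to the same recipe share one hardware resource index.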
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index d0435acaef..e104ba7327 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_action_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cot_idx {
HW_DB_IDX;
};
@@ -48,12 +52,22 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_km_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_km_ft {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_KM_FT,
};
/* Functionality data types */
@@ -123,6 +137,16 @@ struct hw_db_inline_action_set_data {
};
};
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -130,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle);
void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
uint32_t size);
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
@@ -158,6 +184,18 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
/**/
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data);
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data);
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+
+/**/
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 999b1ed985..78d662d70c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2334,6 +2334,23 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ const bool empty_pattern = fd_has_empty_pattern(fd);
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = empty_pattern ? 0x0 : 0x4, /* FT key C */
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle, &cot_data);
+ local_idxs[(*local_idx_counter)++] = cot_idx.raw;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Finalize QSL */
struct hw_db_qsl_idx qsl_idx =
hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
@@ -2428,6 +2445,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ int identical_km_entry_ft = -1;
+
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -2502,6 +2521,130 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ /* Setup KM RCP */
+ struct hw_db_inline_km_rcp_data km_rcp_data = { .rcp = 0 };
+
+ if (fd->km.num_ftype_elem) {
+ struct flow_handle *flow = dev->ndev->flow_base, *found_flow = NULL;
+
+ if (km_key_create(&fd->km, fh->port_id)) {
+ NT_LOG(ERR, FILTER, "KM creation failed");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.be = &dev->ndev->be;
+
+ /* Look for existing KM RCPs */
+ while (flow) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLOW &&
+ flow->fd->km.flow_type) {
+ int res = km_key_compare(&fd->km, &flow->fd->km);
+
+ if (res < 0) {
+ /* Flow rcp and match data is identical */
+ identical_km_entry_ft = flow->fd->km.flow_type;
+ found_flow = flow;
+ break;
+ }
+
+ if (res > 0) {
+ /* Flow rcp found and match data is different */
+ found_flow = flow;
+ }
+ }
+
+ flow = flow->next;
+ }
+
+ km_attach_ndev_resource_management(&fd->km, &dev->ndev->km_res_handle);
+
+ if (found_flow != NULL) {
+ /* Reuse existing KM RCP */
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)
+ found_flow->flm_db_idxs,
+ found_flow->flm_db_idx_counter);
+
+ if (other_km_rcp_data == NULL ||
+ flow_nic_ref_resource(dev->ndev, RES_KM_CATEGORY,
+ other_km_rcp_data->rcp)) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference existing KM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_data.rcp = other_km_rcp_data->rcp;
+ } else {
+ /* Alloc new KM RCP */
+ int rcp = flow_nic_alloc_resource(dev->ndev, RES_KM_CATEGORY, 1);
+
+ if (rcp < 0) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference KM RCP resource (flow_nic_alloc)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_set(&fd->km, rcp);
+ km_rcp_data.rcp = (uint32_t)rcp;
+ }
+ }
+
+ struct hw_db_km_idx km_idx =
+ hw_db_inline_km_add(dev->ndev, dev->ndev->hw_db_handle, &km_rcp_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = km_idx.raw;
+
+ if (km_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM RCP resource (db_inline)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Setup KM FT */
+ struct hw_db_inline_km_ft_data km_ft_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ };
+ struct hw_db_km_ft km_ft_idx =
+ hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = km_ft_idx.raw;
+
+ if (km_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Finalize KM RCP */
+ if (fd->km.num_ftype_elem) {
+ if (identical_km_entry_ft >= 0 && identical_km_entry_ft != km_ft_idx.id1) {
+ NT_LOG(ERR, FILTER,
+ "Identical KM matches cannot have different KM FTs");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.flow_type = km_ft_idx.id1;
+
+ if (fd->km.target == KM_CAM) {
+ uint32_t ft_a_mask = 0;
+ hw_mod_km_rcp_get(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0, &ft_a_mask);
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0,
+ ft_a_mask | (1 << fd->km.flow_type));
+ }
+
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)km_rcp_data.rcp, 1);
+
+ km_write_data_match_entry(&fd->km, 0);
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2782,6 +2925,25 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
} else {
NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->fd->km.num_ftype_elem) {
+ km_clear_data_match_entry(&fh->fd->km);
+
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ if (other_km_rcp_data != NULL &&
+ flow_nic_deref_resource(dev->ndev, RES_KM_CATEGORY,
+ (int)other_km_rcp_data->rcp) == 0) {
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_PRESET_ALL,
+ (int)other_km_rcp_data->rcp, 0, 0);
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)other_km_rcp_data->rcp,
+ 1);
+ }
+ }
+
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
(struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
free(fh->fd);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 31/86] net/ntnic: add hash API
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (29 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 30/86] net/ntnic: add KM module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 32/86] net/ntnic: add TPE module Serhii Iliushyk
` (55 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Hasher module calculates a configurable hash value
to be used internally by the FPGA.
The module supports both Toeplitz and NT-hash.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 40 +
drivers/net/ntnic/include/flow_api_engine.h | 17 +
drivers/net/ntnic/include/hw_mod_backend.h | 20 +
.../ntnic/include/stream_binary_flow_api.h | 25 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 212 +++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 ++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 25 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 ++++
.../profile_inline/flow_api_hw_db_inline.c | 142 +++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 850 +++++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 4 +
drivers/net/ntnic/ntnic_mod_reg.h | 4 +
15 files changed, 1706 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index edffd0a57a..2e96fa5bed 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -29,6 +29,37 @@ struct hw_mod_resource_s {
*/
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev);
+/**
+ * A structure used to configure the Receive Side Scaling (RSS) feature
+ * of an Ethernet port.
+ */
+struct nt_eth_rss_conf {
+ /**
+ * In rte_eth_dev_rss_hash_conf_get(), the *rss_key_len* should be
+ * greater than or equal to the *hash_key_size* obtained from the
+ * rte_eth_dev_info_get() API, and the *rss_key* should contain at
+ * least *hash_key_size* bytes. If these requirements are not met,
+ * the query result is unreliable even if the operation returns success.
+ *
+ * In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(), if
+ * *rss_key* is not NULL, the *rss_key_len* indicates the length of the
+ * *rss_key* in bytes and it should be equal to *hash_key_size*.
+ * If *rss_key* is NULL, drivers are free to use a random or a default key.
+ */
+ uint8_t rss_key[MAX_RSS_KEY_LEN];
+ /**
+ * Indicates the type of packets or the specific part of packets to
+ * which RSS hashing is to be applied.
+ */
+ uint64_t rss_hf;
+ /**
+ * Hash algorithm.
+ */
+ enum rte_eth_hash_function algorithm;
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask);
+
struct flow_eth_dev {
/* NIC that owns this port device */
struct flow_nic_dev *ndev;
@@ -49,6 +80,11 @@ struct flow_eth_dev {
struct flow_eth_dev *next;
};
+enum flow_nic_hash_e {
+ HASH_ALGO_ROUND_ROBIN = 0,
+ HASH_ALGO_5TUPLE,
+};
+
/* registered NIC backends */
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
@@ -191,4 +227,8 @@ void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm);
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index a0f02f4e8a..e52363f04e 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,7 @@ struct km_flow_def_s {
int bank_used;
uint32_t *cuckoo_moves; /* for CAM statistics only */
struct cam_distrib_s *cam_dist;
+ struct hasher_s *hsh;
/* TCAM specific bank management */
struct tcam_distrib_s *tcam_dist;
@@ -136,6 +137,17 @@ struct km_flow_def_s {
int tcam_record;
};
+/*
+ * RSS configuration, see struct rte_flow_action_rss
+ */
+struct hsh_def_s {
+ enum rte_eth_hash_function func; /* RSS hash function to apply */
+ /* RSS hash types, see definition of RTE_ETH_RSS_* for hash calculation options */
+ uint64_t types;
+ uint32_t key_len; /* Hash key length in bytes. */
+ const uint8_t *key; /* Hash key. */
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -247,6 +259,11 @@ struct nic_flow_def {
* Key Matcher flow definitions
*/
struct km_flow_def_s km;
+
+ /*
+ * Hash module RSS definitions
+ */
+ struct hsh_def_s hsh;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 26903f2183..cee148807a 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -149,14 +149,27 @@ enum km_flm_if_select_e {
int debug
enum frame_offs_e {
+ DYN_SOF = 0,
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
+ DYN_MPLS = 3,
DYN_L3 = 4,
+ DYN_ID_IPV4_6 = 5,
+ DYN_FINAL_IP_DST = 6,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
+ DYN_TUN_PAYLOAD = 9,
+ DYN_TUN_L2 = 10,
+ DYN_TUN_VLAN = 11,
+ DYN_TUN_MPLS = 12,
DYN_TUN_L3 = 13,
+ DYN_TUN_ID_IPV4_6 = 14,
+ DYN_TUN_FINAL_IP_DST = 15,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ DYN_EOF = 18,
+ DYN_L3_PAYLOAD_END = 19,
+ DYN_TUN_L3_PAYLOAD_END = 20,
SB_VNI = SWX_INFO | 1,
SB_MAC_PORT = SWX_INFO | 2,
SB_KCC_ID = SWX_INFO | 3
@@ -227,6 +240,11 @@ enum {
};
+enum {
+ HASH_HASH_NONE = 0,
+ HASH_5TUPLE = 8,
+};
+
enum {
CPY_SELECT_DSCP_IPV4 = 0,
CPY_SELECT_DSCP_IPV6 = 1,
@@ -670,6 +688,8 @@ int hw_mod_hsh_alloc(struct flow_api_backend_s *be);
void hw_mod_hsh_free(struct flow_api_backend_s *be);
int hw_mod_hsh_reset(struct flow_api_backend_s *be);
int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value);
struct qsl_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 8097518d61..e5fe686d99 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,6 +12,31 @@
/* Max RSS hash key length in bytes */
#define MAX_RSS_KEY_LEN 40
+/* NT specific MASKs for RSS configuration */
+/* NOTE: Masks are required for correct RSS configuration, do not modify them! */
+#define NT_ETH_RSS_IPV4_MASK \
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+
+#define NT_ETH_RSS_IPV6_MASK \
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NT_ETH_RSS_IP_MASK \
+ (NT_ETH_RSS_IPV4_MASK | NT_ETH_RSS_IPV6_MASK | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
+
+/* List of all RSS flags supported for RSS calculation offload */
+#define NT_ETH_RSS_OFFLOAD_MASK \
+ (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_IPV4_CHKSUM | RTE_ETH_RSS_L4_CHKSUM | RTE_ETH_RSS_PORT | RTE_ETH_RSS_GTPU)
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e1fef37ccb..d7e6d05556 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -56,6 +56,7 @@ sources = files(
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
+ 'nthw/flow_api/flow_hasher.c',
'nthw/flow_api/flow_kcc.c',
'nthw/flow_api/flow_km.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index a51d621ef9..043e4244fc 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -2,6 +2,8 @@
* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2023 Napatech A/S
*/
+#include "ntlog.h"
+#include "nt_util.h"
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
@@ -12,6 +14,11 @@
#define SCATTER_GATHER
+#define RSS_TO_STRING(name) \
+ { \
+ name, #name \
+ }
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -807,6 +814,211 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
return ndev->be.be_dev;
}
+/* Information for a given RSS type. */
+struct rss_type_info {
+ uint64_t rss_type;
+ const char *str;
+};
+
+static struct rss_type_info rss_to_string[] = {
+ /* RTE_BIT64(2) IPv4 dst + IPv4 src */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4),
+ /* RTE_BIT64(3) IPv4 dst + IPv4 src + Identification of group of fragments */
+ RSS_TO_STRING(RTE_ETH_RSS_FRAG_IPV4),
+ /* RTE_BIT64(4) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_TCP),
+ /* RTE_BIT64(5) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_UDP),
+ /* RTE_BIT64(6) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_SCTP),
+ /* RTE_BIT64(7) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_OTHER),
+ /*
+ * RTE_BIT64(14) 128-bits of L2 payload starting after src MAC, i.e. including optional
+ * VLAN tag and ethertype. Overrides all L3 and L4 flags at the same level, but inner
+ * L2 payload can be combined with outer S-VLAN and GTPU TEID flags.
+ */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_PAYLOAD),
+ /* RTE_BIT64(18) L4 dst + L4 src + L4 protocol - see comment of RTE_ETH_RSS_L4_CHKSUM */
+ RSS_TO_STRING(RTE_ETH_RSS_PORT),
+ /* RTE_BIT64(19) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_VXLAN),
+ /* RTE_BIT64(20) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_GENEVE),
+ /* RTE_BIT64(21) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_NVGRE),
+ /* RTE_BIT64(23) GTP TEID - always from outer GTPU header */
+ RSS_TO_STRING(RTE_ETH_RSS_GTPU),
+ /* RTE_BIT64(24) MAC dst + MAC src */
+ RSS_TO_STRING(RTE_ETH_RSS_ETH),
+ /* RTE_BIT64(25) outermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_S_VLAN),
+ /* RTE_BIT64(26) innermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_C_VLAN),
+ /* RTE_BIT64(27) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ESP),
+ /* RTE_BIT64(28) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_AH),
+ /* RTE_BIT64(29) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV3),
+ /* RTE_BIT64(30) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PFCP),
+ /* RTE_BIT64(31) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PPPOE),
+ /* RTE_BIT64(32) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ECPRI),
+ /* RTE_BIT64(33) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_MPLS),
+ /* RTE_BIT64(34) IPv4 Header checksum + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4_CHKSUM),
+
+ /*
+ * if combined with RTE_ETH_RSS_NONFRAG_IPV4_[TCP|UDP|SCTP] then
+ * L4 protocol + chosen protocol header Checksum
+ * else
+ * error
+ */
+ /* RTE_BIT64(35) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_CHKSUM),
+#ifndef ANDROMEDA_DPDK_21_11
+ /* RTE_BIT64(36) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV2),
+#endif
+
+ { RTE_BIT64(37), "unknown_RTE_BIT64(37)" },
+ { RTE_BIT64(38), "unknown_RTE_BIT64(38)" },
+ { RTE_BIT64(39), "unknown_RTE_BIT64(39)" },
+ { RTE_BIT64(40), "unknown_RTE_BIT64(40)" },
+ { RTE_BIT64(41), "unknown_RTE_BIT64(41)" },
+ { RTE_BIT64(42), "unknown_RTE_BIT64(42)" },
+ { RTE_BIT64(43), "unknown_RTE_BIT64(43)" },
+ { RTE_BIT64(44), "unknown_RTE_BIT64(44)" },
+ { RTE_BIT64(45), "unknown_RTE_BIT64(45)" },
+ { RTE_BIT64(46), "unknown_RTE_BIT64(46)" },
+ { RTE_BIT64(47), "unknown_RTE_BIT64(47)" },
+ { RTE_BIT64(48), "unknown_RTE_BIT64(48)" },
+ { RTE_BIT64(49), "unknown_RTE_BIT64(49)" },
+
+ /* RTE_BIT64(50) outermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_OUTERMOST),
+ /* RTE_BIT64(51) innermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_INNERMOST),
+
+ /* RTE_BIT64(52) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE96),
+ /* RTE_BIT64(53) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE64),
+ /* RTE_BIT64(54) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE56),
+ /* RTE_BIT64(55) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE48),
+ /* RTE_BIT64(56) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE40),
+ /* RTE_BIT64(57) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE32),
+
+ /* RTE_BIT64(58) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_DST_ONLY),
+ /* RTE_BIT64(59) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_SRC_ONLY),
+ /* RTE_BIT64(60) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_DST_ONLY),
+ /* RTE_BIT64(61) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_SRC_ONLY),
+ /* RTE_BIT64(62) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_DST_ONLY),
+ /* RTE_BIT64(63) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_SRC_ONLY),
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask)
+{
+ if (str == NULL || str_len == 0)
+ return -1;
+
+ memset(str, 0x0, str_len);
+ uint16_t str_end = 0;
+ const struct rss_type_info *start = rss_to_string;
+
+ for (const struct rss_type_info *p = start; p != start + ARRAY_SIZE(rss_to_string); ++p) {
+ if (p->rss_type & hash_mask) {
+ if (strlen(prefix) + strlen(p->str) < (size_t)(str_len - str_end)) {
+ snprintf(str + str_end, str_len - str_end, "%s", prefix);
+ str_end += strlen(prefix);
+ snprintf(str + str_end, str_len - str_end, "%s", p->str);
+ str_end += strlen(p->str);
+
+ } else {
+ return -1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Hash
+ */
+
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm)
+{
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ switch (algorithm) {
+ case HASH_ALGO_5TUPLE:
+ /* need to create an IPv6 hashing and enable the adaptive ip mask bit */
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_OFS, hsh_idx, 0, -16);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_PE, hsh_idx, 0, DYN_L4);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_PE, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_P, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 1, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 2, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 3, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 4, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 5, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 6, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 7, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 8, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 9, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_VALID, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_TYPE, hsh_idx, 0, HASH_5TUPLE);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0, 1);
+
+ NT_LOG(DBG, FILTER, "Set IPv6 5-tuple hasher with adaptive IPv4 hashing");
+ break;
+
+ default:
+ case HASH_ALGO_ROUND_ROBIN:
+ /* zero is round-robin */
+ break;
+ }
+
+ return 0;
+}
+
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.c b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
new file mode 100644
index 0000000000..86dfc16e79
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
@@ -0,0 +1,156 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <math.h>
+
+#include "flow_hasher.h"
+
+static uint32_t shuffle(uint32_t x)
+{
+ return ((x & 0x00000002) << 29) | ((x & 0xAAAAAAA8) >> 3) | ((x & 0x15555555) << 3) |
+ ((x & 0x40000000) >> 29);
+}
+
+static uint32_t ror_inv(uint32_t x, const int s)
+{
+ return (x >> s) | ((~x) << (32 - s));
+}
+
+static uint32_t combine(uint32_t x, uint32_t y)
+{
+ uint32_t x1 = ror_inv(x, 15);
+ uint32_t x2 = ror_inv(x, 13);
+ uint32_t y1 = ror_inv(y, 3);
+ uint32_t y2 = ror_inv(y, 27);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint32_t mix(uint32_t x, uint32_t y)
+{
+ return shuffle(combine(x, y));
+}
+
+static uint64_t ror_inv3(uint64_t x)
+{
+ const uint64_t m = 0xE0000000E0000000ULL;
+
+ return ((x >> 3) | m) ^ ((x << 29) & m);
+}
+
+static uint64_t ror_inv13(uint64_t x)
+{
+ const uint64_t m = 0xFFF80000FFF80000ULL;
+
+ return ((x >> 13) | m) ^ ((x << 19) & m);
+}
+
+static uint64_t ror_inv15(uint64_t x)
+{
+ const uint64_t m = 0xFFFE0000FFFE0000ULL;
+
+ return ((x >> 15) | m) ^ ((x << 17) & m);
+}
+
+static uint64_t ror_inv27(uint64_t x)
+{
+ const uint64_t m = 0xFFFFFFE0FFFFFFE0ULL;
+
+ return ((x >> 27) | m) ^ ((x << 5) & m);
+}
+
+static uint64_t shuffle64(uint64_t x)
+{
+ return ((x & 0x0000000200000002) << 29) | ((x & 0xAAAAAAA8AAAAAAA8) >> 3) |
+ ((x & 0x1555555515555555) << 3) | ((x & 0x4000000040000000) >> 29);
+}
+
+static uint64_t pair(uint32_t x, uint32_t y)
+{
+ return ((uint64_t)x << 32) | y;
+}
+
+static uint64_t combine64(uint64_t x, uint64_t y)
+{
+ uint64_t x1 = ror_inv15(x);
+ uint64_t x2 = ror_inv13(x);
+ uint64_t y1 = ror_inv3(y);
+ uint64_t y2 = ror_inv27(y);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint64_t mix64(uint64_t x, uint64_t y)
+{
+ return shuffle64(combine64(x, y));
+}
+
+static uint32_t calc16(const uint32_t key[16])
+{
+ /*
+ * 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Layer 0
+ * \./ \./ \./ \./ \./ \./ \./ \./
+ * 0 1 2 3 4 5 6 7 Layer 1
+ * \__.__/ \__.__/ \__.__/ \__.__/
+ * 0 1 2 3 Layer 2
+ * \______.______/ \______.______/
+ * 0 1 Layer 3
+ * \______________.______________/
+ * 0 Layer 4
+ * / \
+ * \./
+ * 0 Layer 5
+ * / \
+ * \./ Layer 6
+ * value
+ */
+
+ uint64_t z;
+ uint32_t x;
+
+ z = mix64(mix64(mix64(pair(key[0], key[8]), pair(key[1], key[9])),
+ mix64(pair(key[2], key[10]), pair(key[3], key[11]))),
+ mix64(mix64(pair(key[4], key[12]), pair(key[5], key[13])),
+ mix64(pair(key[6], key[14]), pair(key[7], key[15]))));
+
+ x = mix((uint32_t)(z >> 32), (uint32_t)z);
+ x = mix(x, ror_inv(x, 17));
+ x = combine(x, ror_inv(x, 17));
+
+ return x;
+}
+
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result)
+{
+ uint64_t val;
+ uint32_t res;
+
+ val = calc16(key);
+ res = (uint32_t)val;
+
+ if (hsh->cam_bw > 32)
+ val = (val << (hsh->cam_bw - 32)) ^ val;
+
+ for (int i = 0; i < hsh->banks; i++) {
+ result[i] = (unsigned int)(val & hsh->cam_records_bw_mask);
+ val = val >> hsh->cam_records_bw;
+ }
+
+ return res;
+}
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records)
+{
+ hsh->banks = banks;
+ hsh->cam_records_bw = (int)(log2(nb_records - 1) + 1);
+ hsh->cam_records_bw_mask = (1U << hsh->cam_records_bw) - 1;
+ hsh->cam_bw = hsh->banks * hsh->cam_records_bw;
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.h b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
new file mode 100644
index 0000000000..15de8e9933
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_HASHER_H_
+#define _FLOW_HASHER_H_
+
+#include <stdint.h>
+
+struct hasher_s {
+ int banks;
+ int cam_records_bw;
+ uint32_t cam_records_bw_mask;
+ int cam_bw;
+};
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records);
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result);
+
+#endif /* _FLOW_HASHER_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 30d6ea728e..f79919cb81 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -9,6 +9,7 @@
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
#include "nt_util.h"
+#include "flow_hasher.h"
#define MAX_QWORDS 2
#define MAX_SWORDS 2
@@ -75,10 +76,25 @@ static int tcam_find_mapping(struct km_flow_def_s *km);
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
{
+ /*
+ * KM entries occupied in CAM - to manage the cuckoo shuffling
+ * and manage CAM population and usage
+ * KM entries occupied in TCAM - to manage population and usage
+ */
+ if (!*handle) {
+ *handle = calloc(1,
+ (size_t)CAM_ENTRIES + sizeof(uint32_t) + (size_t)TCAM_ENTRIES +
+ sizeof(struct hasher_s));
+ NT_LOG(DBG, FILTER, "Allocate NIC DEV CAM and TCAM record manager");
+ }
+
km->cam_dist = (struct cam_distrib_s *)*handle;
km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
km->tcam_dist =
(struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+
+ km->hsh = (struct hasher_s *)((char *)km->tcam_dist + TCAM_ENTRIES);
+ init_hasher(km->hsh, km->be->km.nb_cam_banks, km->be->km.nb_cam_records);
}
void km_free_ndev_resource_management(void **handle)
@@ -839,9 +855,18 @@ static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx
static int km_write_data_to_cam(struct km_flow_def_s *km)
{
int res = 0;
+ int val[MAX_BANKS];
assert(km->be->km.nb_cam_banks <= MAX_BANKS);
assert(km->cam_dist);
+ /* word list without info set */
+ gethash(km->hsh, km->entry_word, val);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ /* if paired we start always on an even address - reset bit 0 */
+ km->record_indexes[i] = (km->cam_paired) ? val[i] & ~1 : val[i];
+ }
+
NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
km->record_indexes[1], km->record_indexes[2]);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
index df5c00ac42..1750d09afb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
@@ -89,3 +89,182 @@ int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->hsh_rcp_flush(be->be_dev, &be->hsh, start_idx, count);
}
+
+static int hw_mod_hsh_rcp_mod(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t *value, int get)
+{
+ if (index >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 5:
+ switch (field) {
+ case HW_HSH_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->hsh.v5.rcp[index], (uint8_t)*value,
+ sizeof(struct hsh_v5_rcp_s));
+ break;
+
+ case HW_HSH_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off);
+ break;
+
+ case HW_HSH_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off,
+ be->hsh.nb_rcp);
+ break;
+
+ case HW_HSH_RCP_LOAD_DIST_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].load_dist_type, value);
+ break;
+
+ case HW_HSH_RCP_MAC_PORT_MASK:
+ if (word_off > HSH_RCP_MAC_PORT_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].mac_port_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SORT:
+ GET_SET(be->hsh.v5.rcp[index].sort, value);
+ break;
+
+ case HW_HSH_RCP_QW0_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw0_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_HSH_RCP_QW4_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw4_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_PE:
+ GET_SET(be->hsh.v5.rcp[index].w8_pe, value);
+ break;
+
+ case HW_HSH_RCP_W8_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w8_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w8_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_PE:
+ GET_SET(be->hsh.v5.rcp[index].w9_pe, value);
+ break;
+
+ case HW_HSH_RCP_W9_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w9_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W9_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w9_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_P:
+ GET_SET(be->hsh.v5.rcp[index].w9_p, value);
+ break;
+
+ case HW_HSH_RCP_P_MASK:
+ GET_SET(be->hsh.v5.rcp[index].p_mask, value);
+ break;
+
+ case HW_HSH_RCP_WORD_MASK:
+ if (word_off >= HSH_RCP_WORD_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].word_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SEED:
+ GET_SET(be->hsh.v5.rcp[index].seed, value);
+ break;
+
+ case HW_HSH_RCP_TNL_P:
+ GET_SET(be->hsh.v5.rcp[index].tnl_p, value);
+ break;
+
+ case HW_HSH_RCP_HSH_VALID:
+ GET_SET(be->hsh.v5.rcp[index].hsh_valid, value);
+ break;
+
+ case HW_HSH_RCP_HSH_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].hsh_type, value);
+ break;
+
+ case HW_HSH_RCP_TOEPLITZ:
+ GET_SET(be->hsh.v5.rcp[index].toeplitz, value);
+ break;
+
+ case HW_HSH_RCP_K:
+ if (word_off >= HSH_RCP_KEY_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].k[word_off], value);
+ break;
+
+ case HW_HSH_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->hsh.v5.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 5 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value)
+{
+ return hw_mod_hsh_rcp_mod(be, field, index, word_off, &value, 0);
+}
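The `hw_mod_hsh_rcp_mod()` accessor above multiplexes reads and writes over a single field switch through GET_SET-style macros, so `hw_mod_hsh_rcp_set()` is just a thin wrapper around it. A minimal standalone sketch of that dispatch pattern (the macro and struct names here are simplified illustrations, not the driver's actual definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a GET_SET-style accessor: one function serves both
 * directions, selected by the 'get' flag (hypothetical names). */
#define GET_SET(fld, val, get)          \
	do {                            \
		if (get)                \
			*(val) = (fld); \
		else                    \
			(fld) = *(val); \
	} while (0)

struct rcp_s {
	uint32_t seed;
	uint32_t sort;
};

/* returns 0 on success, -1 for an unsupported field */
static int rcp_mod(struct rcp_s *rcp, int field, uint32_t *value, int get)
{
	switch (field) {
	case 0:
		GET_SET(rcp->seed, value, get);
		break;
	case 1:
		GET_SET(rcp->sort, value, get);
		break;
	default:
		return -1; /* unsupported field */
	}
	return 0;
}
```

Folding get and set into one dispatcher keeps the per-field bounds and capability checks in a single place, which is the design choice the driver code above follows.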
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4737460cdf..068c890b45 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,9 +30,15 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_hsh {
+ struct hw_db_inline_hsh_data data;
+ int ref;
+ } *hsh;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_hsh;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -122,6 +128,21 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
}
+ db->cfn = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cfn));
+
+ if (db->cfn == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_hsh = ndev->be.hsh.nb_rcp;
+ db->hsh = calloc(db->nb_hsh, sizeof(struct hw_db_inline_resource_db_hsh));
+
+ if (db->hsh == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -133,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->hsh);
+
free(db->cat);
if (db->km) {
@@ -180,6 +203,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_HSH:
+ hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -219,6 +246,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_KM_FT:
return NULL; /* FTs can't be easily looked up */
+ case HW_DB_IDX_TYPE_HSH:
+ return &db->hsh[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -247,6 +277,7 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
{
(void)ft;
(void)qsl_hw_id;
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
@@ -848,3 +879,114 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* HSH */
+/******************************************************************************/
+
+static int hw_db_inline_hsh_compare(const struct hw_db_inline_hsh_data *data1,
+ const struct hw_db_inline_hsh_data *data2)
+{
+ for (uint32_t i = 0; i < MAX_RSS_KEY_LEN; ++i)
+ if (data1->key[i] != data2->key[i])
+ return 0;
+
+ return data1->func == data2->func && data1->hash_mask == data2->hash_mask;
+}
+
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_hsh_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_HSH;
+
+ /* check if default hash configuration shall be used, i.e. rss_hf is not set */
+ /*
+ * NOTE: hsh id 0 is reserved for the "default" HSH used by
+ * port configuration; all ports share the same default hash settings.
+ */
+ if (data->hash_mask == 0) {
+ idx.ids = 0;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_hsh; ++i) {
+ int ref = db->hsh[i].ref;
+
+ if (ref > 0 && hw_db_inline_hsh_compare(data, &db->hsh[i].data)) {
+ idx.ids = i;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ struct nt_eth_rss_conf tmp_rss_conf;
+
+ tmp_rss_conf.rss_hf = data->hash_mask;
+ memcpy(tmp_rss_conf.rss_key, data->key, MAX_RSS_KEY_LEN);
+ tmp_rss_conf.algorithm = data->func;
+ int res = flow_nic_set_hasher_fields(ndev, idx.ids, tmp_rss_conf);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->hsh[idx.ids].ref = 1;
+ memcpy(&db->hsh[idx.ids].data, data, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, idx.ids);
+
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->hsh[idx.ids].ref += 1;
+}
+
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->hsh[idx.ids].ref -= 1;
+
+ if (db->hsh[idx.ids].ref <= 0) {
+ /*
+ * NOTE: hsh id 0 is reserved for "default" HSH used by
+ * port configuration, so we shall keep it even if
+ * it is not used by any flow
+ */
+ if (idx.ids > 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, idx.ids, 0, 0x0);
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->hsh[idx.ids].data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_free_resource(ndev, RES_HSH_RCP, idx.ids);
+ }
+
+ db->hsh[idx.ids].ref = 0;
+ }
+}
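The `hw_db_inline_hsh_add()`/`_ref()`/`_deref()` trio above implements content-addressed deduplication with reference counting, with index 0 reserved for the shared port-default recipe. A condensed sketch of that scheme (table size, field names, and the `uint32_t` payload are illustrative only):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hedged sketch of the dedup + refcount scheme: slot 0 is reserved as
 * the shared "default"; other slots are matched by content and reused
 * through a reference count. */
#define DB_SLOTS 4

struct db_entry {
	uint32_t data; /* stands in for struct hw_db_inline_hsh_data */
	int ref;
};

static struct db_entry db[DB_SLOTS];

/* returns a slot index, or -1 when the table is exhausted */
static int db_add(uint32_t data)
{
	int free_slot = -1;

	for (int i = 1; i < DB_SLOTS; i++) {
		if (db[i].ref > 0 && db[i].data == data) {
			db[i].ref++; /* identical entry exists: share it */
			return i;
		}
		if (free_slot < 0 && db[i].ref <= 0)
			free_slot = i; /* remember first reusable slot */
	}

	if (free_slot < 0)
		return -1;

	db[free_slot].data = data;
	db[free_slot].ref = 1;
	return free_slot;
}

static void db_deref(int idx)
{
	if (idx <= 0) /* slot 0 (default) is never released */
		return;

	if (--db[idx].ref <= 0) {
		memset(&db[idx], 0, sizeof(db[idx]));
		db[idx].ref = 0;
	}
}
```

As in the driver code above, an entry is shared whenever identical data is requested again, and its resources are released only when the last reference is dropped.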
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index e104ba7327..c97bdef1b7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -60,6 +60,10 @@ struct hw_db_km_ft {
HW_DB_IDX;
};
+struct hw_db_hsh_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
@@ -68,6 +72,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_SLC_LR,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
+ HW_DB_IDX_TYPE_HSH,
};
/* Functionality data types */
@@ -133,6 +138,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_hsh_idx hsh;
};
};
};
@@ -175,6 +181,11 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data);
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 78d662d70c..f6482941d6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -25,6 +25,15 @@
#define NT_VIOLATING_MBR_CFN 0
#define NT_VIOLATING_MBR_QSL 1
+#define RTE_ETH_RSS_UDP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)
+
+#define RTE_ETH_RSS_TCP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX)
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2322,10 +2331,27 @@ static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_d
}
}
+static void setup_db_hsh_data(struct nic_flow_def *fd, struct hw_db_inline_hsh_data *hsh_data)
+{
+ memset(hsh_data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+
+ hsh_data->func = fd->hsh.func;
+ hsh_data->hash_mask = fd->hsh.types;
+
+ if (fd->hsh.key != NULL) {
+ /*
+ * Just a safeguard. Checking and error handling of rss_key_len
+ * shall be done at API layers above.
+ */
+ memcpy(&hsh_data->key, fd->hsh.key,
+ fd->hsh.key_len < MAX_RSS_KEY_LEN ? fd->hsh.key_len : MAX_RSS_KEY_LEN);
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
- const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data,
uint32_t group __rte_unused,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
@@ -2362,6 +2388,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle, hsh_data);
+ local_idxs[(*local_idx_counter)++] = hsh_idx.raw;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2405,6 +2442,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
+ setup_db_hsh_data(fd, &hsh_data);
if (attr->group > 0 && fd_has_empty_pattern(fd)) {
/*
@@ -2488,6 +2526,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle,
+ &hsh_data);
+ fh->db_idxs[fh->db_idx_counter++] = hsh_idx.raw;
+ action_set_data.hsh = hsh_idx;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2667,6 +2718,126 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
return NULL;
}
+/*
+ * Internal hash (HSH) helper functions
+ */
+
+/*
+ * FPGA uses up to 10 32-bit words (320 bits) for hash calculation + 8 bits for L4 protocol number.
+ * Hashed data are split between two 128-bit Quad Words (QW)
+ * and two 32-bit Words (W), which can refer to different header parts.
+ */
+enum hsh_words_id {
+ HSH_WORDS_QW0 = 0,
+ HSH_WORDS_QW4,
+ HSH_WORDS_W8,
+ HSH_WORDS_W9,
+ HSH_WORDS_SIZE,
+};
+
+/* struct with details about hash QWs & Ws */
+struct hsh_words {
+ /*
+ * index of W (word) or index of 1st word of QW (quad word)
+ * is used for hash mask calculation
+ */
+ uint8_t index;
+ uint8_t toeplitz_index; /* offset in Bytes of given [Q]W inside Toeplitz RSS key */
+ enum hw_hsh_e pe; /* offset to header part, e.g. beginning of L4 */
+ enum hw_hsh_e ofs; /* relative offset in BYTES to 'pe' header offset above */
+ uint16_t bit_len; /* max length of header part in bits to fit into QW/W */
+ bool free; /* only free words can be used for hsh calculation */
+};
+
+static enum hsh_words_id get_free_word(struct hsh_words *words, uint16_t bit_len)
+{
+ enum hsh_words_id ret = HSH_WORDS_SIZE;
+ uint16_t ret_bit_len = UINT16_MAX;
+
+ for (enum hsh_words_id i = HSH_WORDS_QW0; i < HSH_WORDS_SIZE; i++) {
+ if (words[i].free && bit_len <= words[i].bit_len &&
+ words[i].bit_len < ret_bit_len) {
+ ret = i;
+ ret_bit_len = words[i].bit_len;
+ }
+ }
+
+ return ret;
+}
+
+static int flow_nic_set_hasher_part_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct hsh_words *words, uint32_t pe, uint32_t ofs,
+ int bit_len, bool toeplitz)
+{
+ int res = 0;
+
+ /* check if there is any free word, which can accommodate header part of given 'bit_len' */
+ enum hsh_words_id word = get_free_word(words, bit_len);
+
+ if (word == HSH_WORDS_SIZE) {
+ NT_LOG(ERR, FILTER, "Cannot add additional %d bits into hash", bit_len);
+ return -1;
+ }
+
+ words[word].free = false;
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].pe, hsh_idx, 0, pe);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].pe,
+ hsh_idx, pe);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].ofs, hsh_idx, 0, ofs);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].ofs,
+ hsh_idx, ofs);
+
+ /* set HW_HSH_RCP_WORD_MASK based on used QW/W and given 'bit_len' */
+ int mask_bit_len = bit_len;
+ uint32_t mask = 0x0;
+ uint32_t mask_be = 0x0;
+ uint32_t toeplitz_mask[9] = { 0x0 };
+ /* iterate through all words of QW */
+ uint16_t words_count = words[word].bit_len / 32;
+
+ for (uint16_t mask_off = 1; mask_off <= words_count; mask_off++) {
+ if (mask_bit_len >= 32) {
+ mask_bit_len -= 32;
+ mask = 0xffffffff;
+ mask_be = mask;
+
+ } else if (mask_bit_len > 0) {
+ /* keep bits from left to right, i.e. little to big endian */
+ mask_be = 0xffffffff >> (32 - mask_bit_len);
+ mask = mask_be << (32 - mask_bit_len);
+ mask_bit_len = 0;
+
+ } else {
+ mask = 0x0;
+ mask_be = 0x0;
+ }
+
+ /* reorder QW words mask from little to big endian */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx,
+ words[word].index + words_count - mask_off, mask);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, words[word].index + words_count - mask_off, mask);
+ toeplitz_mask[words[word].toeplitz_index + mask_off - 1] = mask_be;
+ }
+
+ if (toeplitz) {
+ NT_LOG(DBG, FILTER,
+ "Partial Toeplitz RSS key mask: %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 "",
+ toeplitz_mask[8], toeplitz_mask[7], toeplitz_mask[6], toeplitz_mask[5],
+ toeplitz_mask[4], toeplitz_mask[3], toeplitz_mask[2], toeplitz_mask[1],
+ toeplitz_mask[0]);
+ NT_LOG(DBG, FILTER,
+ " MSB LSB");
+ }
+
+ return res;
+}
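The word-mask loop in `flow_nic_set_hasher_part_inline()` above converts a header part length in bits into per-32-bit-word masks, filling from the most significant side. A standalone sketch of just that mask derivation (simplified; the HW register writes, endianness reordering, and Toeplitz bookkeeping are omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Derive per-word masks for a header part of 'bit_len' bits spread
 * across 'words_count' 32-bit words, masking 32 bits at a time from
 * the MSB side (mirrors the mask/mask_be computation in the loop). */
static void derive_masks(int bit_len, int words_count, uint32_t *out)
{
	int remaining = bit_len;

	for (int i = 0; i < words_count; i++) {
		if (remaining >= 32) {
			out[i] = 0xffffffff; /* word fully covered */
			remaining -= 32;
		} else if (remaining > 0) {
			/* keep only the top 'remaining' bits of the word */
			out[i] = (0xffffffffu >> (32 - remaining))
				<< (32 - remaining);
			remaining = 0;
		} else {
			out[i] = 0x0; /* word not used by this part */
		}
	}
}
```

For example, a 48-bit MAC address placed in a 128-bit QW yields one fully set word, a half-set word, and two zero words.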
+
/*
* Public functions
*/
@@ -2717,6 +2888,12 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+ /* Set default hasher recipe to 5-tuple */
+ flow_nic_set_hasher(ndev, 0, HASH_ALGO_5TUPLE);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -2783,6 +2960,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, 0, 0, 0);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
@@ -2980,6 +3161,672 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
+{
+ return (hash_mask & hash_bits) == hash_bits;
+}
+
+static __rte_always_inline void unset_bits(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ *hash_mask &= ~hash_bits;
+}
+
+static __rte_always_inline void unset_bits_and_log(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", *hash_mask & hash_bits) == 0)
+ NT_LOG(DBG, FILTER, "Configured RSS types:%s", rss_buffer);
+
+ unset_bits(hash_mask, hash_bits);
+}
+
+static __rte_always_inline void unset_bits_if_all_enabled(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ if (all_bits_enabled(*hash_mask, hash_bits))
+ unset_bits(hash_mask, hash_bits);
+}
+
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ uint64_t fields = rss_conf.rss_hf;
+
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", fields) == 0)
+ NT_LOG(DBG, FILTER, "Requested RSS types:%s", rss_buffer);
+
+ /*
+ * configure all (Q)Words usable for hash calculation
+ * Hash can be calculated from 4 independent header parts:
+ * | QW0 | QW4 | W8 | W9 |
+ * word | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
+ */
+ struct hsh_words words[HSH_WORDS_SIZE] = {
+ { 0, 5, HW_HSH_RCP_QW0_PE, HW_HSH_RCP_QW0_OFS, 128, true },
+ { 4, 1, HW_HSH_RCP_QW4_PE, HW_HSH_RCP_QW4_OFS, 128, true },
+ { 8, 0, HW_HSH_RCP_W8_PE, HW_HSH_RCP_W8_OFS, 32, true },
+ {
+ 9, 255, HW_HSH_RCP_W9_PE, HW_HSH_RCP_W9_OFS, 32,
+ true
+ }, /* not supported for Toeplitz */
+ };
+
+ int res = 0;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+ /* enable hashing */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+
+ /* configure selected hash function and its key */
+ bool toeplitz = false;
+
+ switch (rss_conf.algorithm) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ /* Use default NTH10 hashing algorithm */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 0);
+ /* Use the first 32 bits of rss_key to configure the NTH10 SEED */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0,
+ rss_conf.rss_key[0] << 24 | rss_conf.rss_key[1] << 16 |
+ rss_conf.rss_key[2] << 8 | rss_conf.rss_key[3]);
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ toeplitz = true;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 1);
+ uint8_t empty_key = 0;
+
+ /* Toeplitz key (always 40B) must be encoded from little to big endian */
+ for (uint8_t i = 0; i <= (MAX_RSS_KEY_LEN - 8); i += 8) {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 |
+ rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 |
+ rss_conf.rss_key[i + 7]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 | rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 | rss_conf.rss_key[i + 7]);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 |
+ rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 |
+ rss_conf.rss_key[i + 3]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 | rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 | rss_conf.rss_key[i + 3]);
+ empty_key |= rss_conf.rss_key[i] | rss_conf.rss_key[i + 1] |
+ rss_conf.rss_key[i + 2] | rss_conf.rss_key[i + 3] |
+ rss_conf.rss_key[i + 4] | rss_conf.rss_key[i + 5] |
+ rss_conf.rss_key[i + 6] | rss_conf.rss_key[i + 7];
+ }
+
+ if (empty_key == 0) {
+ NT_LOG(ERR, FILTER,
+ "Toeplitz key must be configured. Key with all bytes set to zero is not allowed.");
+ return -1;
+ }
+
+ words[HSH_WORDS_W9].free = false;
+ NT_LOG(DBG, FILTER,
+ "Toeplitz hashing is enabled, thus W9 and P_MASK cannot be used.");
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Unknown hashing function %d requested", rss_conf.algorithm);
+ return -1;
+ }
+
+ /* indication that some IPv6 flag is present */
+ bool ipv6 = fields & (NT_ETH_RSS_IPV6_MASK);
+ /* store proto mask for later use at IP and L4 checksum handling */
+ uint64_t l4_proto_mask = fields &
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX);
+
+ /* outermost headers are used by default, so innermost bit takes precedence if detected */
+ bool outer = (fields & RTE_ETH_RSS_LEVEL_INNERMOST) ? false : true;
+ unset_bits(&fields, RTE_ETH_RSS_LEVEL_MASK);
+
+ if (fields == 0) {
+ NT_LOG(ERR, FILTER, "RSS hash configuration 0x%" PRIX64 " is not valid.",
+ rss_conf.rss_hf);
+ return -1;
+ }
+
+ /* indication that IPv4 `protocol` or IPv6 `next header` fields shall be part of the hash
+ */
+ bool l4_proto_hash = false;
+
+ /*
+ * check if SRC_ONLY & DST_ONLY are used simultaneously;
+ * According to DPDK, we shall behave as if none of these bits is set
+ */
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+
+ /* L2 */
+ if (fields & (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 6, 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 96, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 6,
+ 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 96, toeplitz);
+ }
+
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY |
+ RTE_ETH_RSS_L2_DST_ONLY);
+ }
+
+ /*
+ * VLAN support of multiple VLAN headers,
+ * where S-VLAN is the first and C-VLAN the last VLAN header
+ */
+ if (fields & RTE_ETH_RSS_C_VLAN) {
+ /*
+ * use the MPLS protocol offset, which points just after the ethertype, with a
+ * relative offset of -6 (i.e. 2 bytes of ethertype & size + 4 bytes of VLAN
+ * header field) to access the last VLAN header
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer C-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, -6,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner C-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1Q tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ -6, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_C_VLAN);
+ }
+
+ if (fields & RTE_ETH_RSS_S_VLAN) {
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer S-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1ad tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_FIRST_VLAN, 0, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner S-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1ad tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_VLAN,
+ 0, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_S_VLAN);
+ }
+ /* L2 payload */
+ /* calculate hash over 128 bits of L2 payload; use the MPLS protocol offset to
+ * address the beginning of the L2 payload even if no MPLS header is present
+ */
+ if (fields & RTE_ETH_RSS_L2_PAYLOAD) {
+ uint64_t outer_fields_enabled = 0;
+
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ 0, 128, toeplitz);
+ outer_fields_enabled = fields & RTE_ETH_RSS_GTPU;
+ }
+
+ /*
+ * L2 PAYLOAD hashing overrides all L3 & L4 RSS flags.
+ * Thus we can clear all remaining (supported)
+ * RSS flags...
+ */
+ unset_bits_and_log(&fields, NT_ETH_RSS_OFFLOAD_MASK);
+ /*
+ * ...but in case of INNER L2 PAYLOAD we must process
+ * "always outer" GTPU field if enabled
+ */
+ fields |= outer_fields_enabled;
+ }
+
+ /* L3 + L4 protocol number */
+ if (fields & RTE_ETH_RSS_IPV4_CHKSUM) {
+ /* only IPv4 checksum is supported by DPDK RTE_ETH_RSS_* types */
+ if (ipv6) {
+ NT_LOG(ERR, FILTER,
+ "RSS: IPv4 checksum requested with IPv6 header hashing!");
+ res = 1;
+
+ } else if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L3, 10,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L3,
+ 10, 16, toeplitz);
+ }
+
+ /*
+ * the L3 checksum covers the whole L3 header, i.e. there is no need to process
+ * L3 hashing flags
+ */
+ unset_bits_and_log(&fields, RTE_ETH_RSS_IPV4_CHKSUM | NT_ETH_RSS_IP_MASK);
+ }
+
+ if (fields & NT_ETH_RSS_IP_MASK) {
+ if (ipv6) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & (RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6)) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 32, toeplitz);
+ }
+ }
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0,
+ 1);
+
+ } else {
+ /* IPv4 */
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 32, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 16,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 64, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 32,
+ toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 16, 32,
+ toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 64,
+ toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & RTE_ETH_RSS_FRAG_IPV4) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 16, toeplitz);
+ }
+ }
+ }
+
+ /* check if L4 protocol type shall be part of hash */
+ if (l4_proto_mask)
+ l4_proto_hash = true;
+
+ unset_bits_and_log(&fields, NT_ETH_RSS_IP_MASK);
+ }
+
+ /* L4 */
+ if (fields & (RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 2, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 32, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 2,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 32, toeplitz);
+ }
+
+ l4_proto_hash = true;
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY);
+ }
+
+ /* IPv4 protocol / IPv6 next header fields */
+ if (l4_proto_hash) {
+ /* NOTE: HW_HSH_RCP_P_MASK is not supported for Toeplitz and thus one of SW0, SW4
+ * or W8 must be used to hash on `protocol` field of IPv4 or `next header` field of
+ * IPv6 header.
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 6, 8,
+ toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 9, 8,
+ toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 0);
+ }
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 6, 8, toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 9, 8, toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 1);
+ }
+ }
+
+ l4_proto_hash = false;
+ }
+
+ /*
+ * GTPU - for UPF use cases we always use TEID from outermost GTPU header
+ * even if other headers are innermost
+ */
+ if (fields & RTE_ETH_RSS_GTPU) {
+ NT_LOG(DBG, FILTER, "Set outer GTPU TEID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L4_PAYLOAD, 4, 32,
+ toeplitz);
+ unset_bits_and_log(&fields, RTE_ETH_RSS_GTPU);
+ }
+
+ /* Checksums */
+ /* only UDP, TCP and SCTP checksums are supported */
+ if (fields & RTE_ETH_RSS_L4_CHKSUM) {
+ switch (l4_proto_mask) {
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_UDP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 6, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 6, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_TCP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 16, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 16, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 8, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 8, 32,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+
+ /* none or unsupported protocol was chosen */
+ case 0:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing is supported only for UDP, TCP and SCTP protocols");
+ res = -1;
+ break;
+
+ /* multiple L4 protocols were selected */
+ default:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing can be enabled just for one of UDP, TCP or SCTP protocols");
+ res = -1;
+ break;
+ }
+ }
+
+ if (fields || res != 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", rss_conf.rss_hf) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration%s is not supported for hash func %s.",
+ rss_buffer,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration 0x%" PRIX64
+ " is not supported for hash func %s.",
+ rss_conf.rss_hf,
+ (enum rte_eth_hash_function)toeplitz ? "Toeplitz" : "NTH10");
+ }
+
+ return -1;
+ }
+
+ return res;
+}
+
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -2993,6 +3840,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b87f8542ac..e623bb2352 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,4 +38,8 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 149c549112..1069be2f85 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -252,6 +252,10 @@ struct profile_inline_ops {
int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
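The RSS hasher setup in flow_nic_set_hasher_fields_inline above repeatedly applies one dispatch pattern: src-only hashes a 32-bit slice at the source-address offset, dst-only a 32-bit slice at the destination-address offset, and the default covers the contiguous src+dst region. A minimal sketch of that selection logic for the outer IPv4 case; the flag names and the hash_part struct here are simplified stand-ins for the RTE_ETH_RSS_* bits and the (dyn, offset, length) triples passed to flow_nic_set_hasher_part_inline, not the driver's real API.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for RTE_ETH_RSS_L3_SRC_ONLY / RTE_ETH_RSS_L3_DST_ONLY. */
#define RSS_L3_SRC_ONLY (1u << 0)
#define RSS_L3_DST_ONLY (1u << 1)

struct hash_part {
	unsigned int ofs;      /* byte offset into the IPv4 header */
	unsigned int len_bits; /* number of bits fed to the hasher */
};

/* Select the IPv4 header slice to hash: the source address starts at
 * byte offset 12, the destination at 16; hashing both covers the
 * contiguous 64-bit src+dst region starting at offset 12, matching the
 * three branches in the patch above. */
static struct hash_part select_ipv4_part(uint32_t fields)
{
	if (fields & RSS_L3_SRC_ONLY)
		return (struct hash_part){ 12, 32 };

	if (fields & RSS_L3_DST_ONLY)
		return (struct hash_part){ 16, 32 };

	return (struct hash_part){ 12, 64 };
}
```

The same shape recurs for the inner (tunneled) headers with DYN_TUN_L3 in place of DYN_L3, and for L4 ports with offsets 0 and 2.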
* [PATCH v4 32/86] net/ntnic: add TPE module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (30 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 31/86] net/ntnic: add hash API Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 33/86] net/ntnic: add FLM module Serhii Iliushyk
` (54 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TX Packet Editor is a software abstraction module
that keeps track of the handful of FPGA modules
used to edit packets in the TX pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
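All of the hw_mod_tpe_*_set helpers introduced in this patch funnel into a static *_mod worker that multiplexes get and set through a single pointer parameter, as the GET_SET macro does for each field. A minimal sketch of that accessor pattern; the struct and field names here are illustrative, not the driver's real TPE register layout.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative recipe record; the real driver indexes an array of these
 * per RCP category and bounds-checks the index first. */
struct rcp {
	uint32_t dyn;
	uint32_t ofs;
	uint32_t len;
};

enum rcp_field { F_DYN, F_OFS, F_LEN };

/* One worker serves both directions: with get != 0 the stored field is
 * copied out through *value, otherwise *value is written into the field. */
static int rcp_mod(struct rcp *r, enum rcp_field f, uint32_t *value, int get)
{
	uint32_t *p;

	switch (f) {
	case F_DYN: p = &r->dyn; break;
	case F_OFS: p = &r->ofs; break;
	case F_LEN: p = &r->len; break;
	default:
		return -1; /* unsupported field */
	}

	if (get)
		*value = *p;
	else
		*p = *value;

	return 0;
}

/* Thin set wrapper in the style of hw_mod_tpe_ins_rcp_set(): it passes
 * the value by address so the worker's signature stays uniform. */
static int rcp_set(struct rcp *r, enum rcp_field f, uint32_t value)
{
	return rcp_mod(r, f, &value, 0);
}
```

This keeps the per-field switch in one place while still exposing separate typed set entry points, which is why each new module below adds only a small public wrapper.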
---
drivers/net/ntnic/include/hw_mod_backend.h | 16 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 373 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 70 ++
.../profile_inline/flow_api_profile_inline.c | 127 ++-
5 files changed, 1342 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index cee148807a..e16dcd478f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -889,24 +889,40 @@ void hw_mod_tpe_free(struct flow_api_backend_s *be);
int hw_mod_tpe_reset(struct flow_api_backend_s *be);
int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value);
int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
enum debug_mode_e {
FLOW_BACKEND_DEBUG_MODE_NONE = 0x0000,
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
index 0d73b795d5..ba8f2d0dbb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
@@ -169,6 +169,82 @@ int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpp_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpp_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpp_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpp_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPP_RCP_EXP:
+ GET_SET(be->tpe.v3.rpp_rcp[index].exp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpp_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* IFR_RCP
*/
@@ -203,6 +279,90 @@ int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_ins_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_ins_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.ins_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_ins_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_INS_RCP_DYN:
+ GET_SET(be->tpe.v3.ins_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_INS_RCP_OFS:
+ GET_SET(be->tpe.v3.ins_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_INS_RCP_LEN:
+ GET_SET(be->tpe.v3.ins_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_ins_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RCP
*/
@@ -220,6 +380,102 @@ int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v3_rpl_v4_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RCP_DYN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_RPL_RCP_OFS:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_RPL_RCP_LEN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].len, value);
+ break;
+
+ case HW_TPE_RPL_RCP_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_RCP_EXT_PRIO:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ext_prio, value);
+ break;
+
+ case HW_TPE_RPL_RCP_ETH_TYPE_WR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].eth_type_wr, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_EXT
*/
@@ -237,6 +493,86 @@ int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_ext_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_ext_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_ext[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_ext_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value, be->tpe.nb_rpl_ext_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_EXT_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_ext[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_EXT_META_RPL_LEN:
+ GET_SET(be->tpe.v3.rpl_ext[index].meta_rpl_len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_ext_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RPL
*/
@@ -254,6 +590,89 @@ int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rpl_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rpl_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rpl[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_rpl_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value, be->tpe.nb_rpl_depth);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RPL_VALUE:
+ if (get)
+ memcpy(value, be->tpe.v3.rpl_rpl[index].value,
+ sizeof(uint32_t) * 4);
+
+ else
+ memcpy(be->tpe.v3.rpl_rpl[index].value, value,
+ sizeof(uint32_t) * 4);
+
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_tpe_rpl_rpl_mod(be, field, index, value, 0);
+}
+
/*
* CPY_RCP
*/
@@ -273,6 +692,96 @@ int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_cpy_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_cpy_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ const uint32_t cpy_size = be->tpe.nb_cpy_writers * be->tpe.nb_rcp_categories;
+
+ if (index >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.cpy_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_cpy_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value, cpy_size);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CPY_RCP_READER_SELECT:
+ GET_SET(be->tpe.v3.cpy_rcp[index].reader_select, value);
+ break;
+
+ case HW_TPE_CPY_RCP_DYN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_CPY_RCP_OFS:
+ GET_SET(be->tpe.v3.cpy_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_CPY_RCP_LEN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_cpy_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* HFU_RCP
*/
@@ -290,6 +799,166 @@ int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_hfu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_hfu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.hfu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_hfu_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_outer_l4_len, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_hfu_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* CSU_RCP
*/
@@ -306,3 +975,91 @@ int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_csu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+
+static int hw_mod_tpe_csu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.csu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_csu_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol4_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il4_cmd, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_csu_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 068c890b45..dec96fce85 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,6 +30,17 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_tpe {
+ struct hw_db_inline_tpe_data data;
+ int ref;
+ } *tpe;
+
+ struct hw_db_inline_resource_db_tpe_ext {
+ struct hw_db_inline_tpe_ext_data data;
+ int replace_ram_idx;
+ int ref;
+ } *tpe_ext;
+
struct hw_db_inline_resource_db_hsh {
struct hw_db_inline_hsh_data data;
int ref;
@@ -38,6 +49,8 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_tpe;
+ uint32_t nb_tpe_ext;
uint32_t nb_hsh;
/* Items */
@@ -101,6 +114,22 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_tpe = ndev->be.tpe.nb_rcp_categories;
+ db->tpe = calloc(db->nb_tpe, sizeof(struct hw_db_inline_resource_db_tpe));
+
+ if (db->tpe == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_tpe_ext = ndev->be.tpe.nb_rpl_ext_categories;
+ db->tpe_ext = calloc(db->nb_tpe_ext, sizeof(struct hw_db_inline_resource_db_tpe_ext));
+
+ if (db->tpe_ext == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -154,6 +183,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->tpe);
+ free(db->tpe_ext);
free(db->hsh);
free(db->cat);
@@ -195,6 +226,15 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_TPE:
+ hw_db_inline_tpe_deref(ndev, db_handle, *(struct hw_db_tpe_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ hw_db_inline_tpe_ext_deref(ndev, db_handle,
+ *(struct hw_db_tpe_ext_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -240,6 +280,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_SLC_LR:
return &db->slc_lr[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_TPE:
+ return &db->tpe[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ return &db->tpe_ext[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -652,6 +698,333 @@ void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
}
}
+/******************************************************************************/
+/* TPE */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_compare(const struct hw_db_inline_tpe_data *data1,
+ const struct hw_db_inline_tpe_data *data2)
+{
+ for (int i = 0; i < 6; ++i)
+ if (data1->writer[i].en != data2->writer[i].en ||
+ data1->writer[i].reader_select != data2->writer[i].reader_select ||
+ data1->writer[i].dyn != data2->writer[i].dyn ||
+ data1->writer[i].ofs != data2->writer[i].ofs ||
+ data1->writer[i].len != data2->writer[i].len)
+ return 0;
+
+ return data1->insert_len == data2->insert_len && data1->new_outer == data2->new_outer &&
+ data1->calc_eth_type_from_inner_ip == data2->calc_eth_type_from_inner_ip &&
+ data1->ttl_en == data2->ttl_en && data1->ttl_dyn == data2->ttl_dyn &&
+ data1->ttl_ofs == data2->ttl_ofs && data1->len_a_en == data2->len_a_en &&
+ data1->len_a_pos_dyn == data2->len_a_pos_dyn &&
+ data1->len_a_pos_ofs == data2->len_a_pos_ofs &&
+ data1->len_a_add_dyn == data2->len_a_add_dyn &&
+ data1->len_a_add_ofs == data2->len_a_add_ofs &&
+ data1->len_a_sub_dyn == data2->len_a_sub_dyn &&
+ data1->len_b_en == data2->len_b_en &&
+ data1->len_b_pos_dyn == data2->len_b_pos_dyn &&
+ data1->len_b_pos_ofs == data2->len_b_pos_ofs &&
+ data1->len_b_add_dyn == data2->len_b_add_dyn &&
+ data1->len_b_add_ofs == data2->len_b_add_ofs &&
+ data1->len_b_sub_dyn == data2->len_b_sub_dyn &&
+ data1->len_c_en == data2->len_c_en &&
+ data1->len_c_pos_dyn == data2->len_c_pos_dyn &&
+ data1->len_c_pos_ofs == data2->len_c_pos_ofs &&
+ data1->len_c_add_dyn == data2->len_c_add_dyn &&
+ data1->len_c_add_ofs == data2->len_c_add_ofs &&
+ data1->len_c_sub_dyn == data2->len_c_sub_dyn;
+}
+
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE;
+
+ for (uint32_t i = 1; i < db->nb_tpe; ++i) {
+ int ref = db->tpe[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_compare(data, &db->tpe[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe[idx.ids].ref = 1;
+ memcpy(&db->tpe[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_data));
+
+ if (data->insert_len > 0) {
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_RPP_RCP_EXP, idx.ids, data->insert_len);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_RPL_PTR, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_EXT_PRIO, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_ETH_TYPE_WR, idx.ids,
+ data->calc_eth_type_from_inner_ip);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+ }
+
+ for (uint32_t i = 0; i < 6; ++i) {
+ if (data->writer[i].en) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i,
+ data->writer[i].reader_select);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, data->writer[i].dyn);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, data->writer[i].ofs);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, data->writer[i].len);
+
+ } else {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, 0);
+ }
+
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_WR, idx.ids, data->len_a_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN, idx.ids,
+ data->new_outer);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_DYN, idx.ids,
+ data->len_a_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_OFS, idx.ids,
+ data->len_a_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_DYN, idx.ids,
+ data->len_a_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_OFS, idx.ids,
+ data->len_a_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_SUB_DYN, idx.ids,
+ data->len_a_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_WR, idx.ids, data->len_b_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_DYN, idx.ids,
+ data->len_b_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_OFS, idx.ids,
+ data->len_b_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_DYN, idx.ids,
+ data->len_b_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_OFS, idx.ids,
+ data->len_b_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_SUB_DYN, idx.ids,
+ data->len_b_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_WR, idx.ids, data->len_c_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_DYN, idx.ids,
+ data->len_c_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_OFS, idx.ids,
+ data->len_c_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_DYN, idx.ids,
+ data->len_c_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_OFS, idx.ids,
+ data->len_c_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_SUB_DYN, idx.ids,
+ data->len_c_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_WR, idx.ids, data->ttl_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_DYN, idx.ids, data->ttl_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_OFS, idx.ids, data->ttl_ofs);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe[idx.ids].ref -= 1;
+
+ if (db->tpe[idx.ids].ref <= 0) {
+ for (uint32_t i = 0; i < 6; ++i) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_PRESET_ALL,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->tpe[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_data));
+ db->tpe[idx.ids].ref = 0;
+ }
+}
+
+/******************************************************************************/
+/* TPE_EXT */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_ext_compare(const struct hw_db_inline_tpe_ext_data *data1,
+ const struct hw_db_inline_tpe_ext_data *data2)
+{
+ return data1->size == data2->size &&
+ memcmp(data1->hdr8, data2->hdr8, HW_DB_INLINE_MAX_ENCAP_SIZE) == 0;
+}
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_ext_idx idx = { .raw = 0 };
+ int rpl_rpl_length = ((int)data->size + 15) / 16;
+ int found = 0, rpl_rpl_index = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE_EXT;
+
+ if (data->size > HW_DB_INLINE_MAX_ENCAP_SIZE) {
+ idx.error = 1;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_tpe_ext; ++i) {
+ int ref = db->tpe_ext[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_ext_compare(data, &db->tpe_ext[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ext_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ rpl_rpl_index = flow_nic_alloc_resource_config(ndev, RES_TPE_RPL, rpl_rpl_length, 1);
+
+ if (rpl_rpl_index < 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe_ext[idx.ids].ref = 1;
+ db->tpe_ext[idx.ids].replace_ram_idx = rpl_rpl_index;
+ memcpy(&db->tpe_ext[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_ext_data));
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_RPL_PTR, idx.ids, rpl_rpl_index);
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_META_RPL_LEN, idx.ids, data->size);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_data[4];
+ memcpy(rpl_data, data->hdr32 + i * 4, sizeof(rpl_data));
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_data);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe_ext[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe_ext[idx.ids].ref -= 1;
+
+ if (db->tpe_ext[idx.ids].ref <= 0) {
+ const int rpl_rpl_length = ((int)db->tpe_ext[idx.ids].data.size + 15) / 16;
+ const int rpl_rpl_index = db->tpe_ext[idx.ids].replace_ram_idx;
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_zero[] = { 0, 0, 0, 0 };
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_zero);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, rpl_rpl_index + i);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ memset(&db->tpe_ext[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_ext_data));
+ db->tpe_ext[idx.ids].ref = 0;
+ }
+}
+
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index c97bdef1b7..18d959307e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -52,6 +52,60 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_inline_tpe_data {
+ uint32_t insert_len : 16;
+ uint32_t new_outer : 1;
+ uint32_t calc_eth_type_from_inner_ip : 1;
+ uint32_t ttl_en : 1;
+ uint32_t ttl_dyn : 5;
+ uint32_t ttl_ofs : 8;
+
+ struct {
+ uint32_t en : 1;
+ uint32_t reader_select : 3;
+ uint32_t dyn : 5;
+ uint32_t ofs : 14;
+ uint32_t len : 5;
+ uint32_t padding : 4;
+ } writer[6];
+
+ uint32_t len_a_en : 1;
+ uint32_t len_a_pos_dyn : 5;
+ uint32_t len_a_pos_ofs : 8;
+ uint32_t len_a_add_dyn : 5;
+ uint32_t len_a_add_ofs : 8;
+ uint32_t len_a_sub_dyn : 5;
+
+ uint32_t len_b_en : 1;
+ uint32_t len_b_pos_dyn : 5;
+ uint32_t len_b_pos_ofs : 8;
+ uint32_t len_b_add_dyn : 5;
+ uint32_t len_b_add_ofs : 8;
+ uint32_t len_b_sub_dyn : 5;
+
+ uint32_t len_c_en : 1;
+ uint32_t len_c_pos_dyn : 5;
+ uint32_t len_c_pos_ofs : 8;
+ uint32_t len_c_add_dyn : 5;
+ uint32_t len_c_add_ofs : 8;
+ uint32_t len_c_sub_dyn : 5;
+};
+
+struct hw_db_inline_tpe_ext_data {
+ uint32_t size;
+ union {
+ uint8_t hdr8[HW_DB_INLINE_MAX_ENCAP_SIZE];
+ uint32_t hdr32[(HW_DB_INLINE_MAX_ENCAP_SIZE + 3) / 4];
+ };
+};
+
+struct hw_db_tpe_idx {
+ HW_DB_IDX;
+};
+struct hw_db_tpe_ext_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -70,6 +124,9 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_TPE,
+ HW_DB_IDX_TYPE_TPE_EXT,
+
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
@@ -138,6 +195,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
};
@@ -181,6 +239,18 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data);
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data);
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+
struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_hsh_data *data);
void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f6482941d6..85a8a4fc0e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -18,6 +18,8 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
@@ -2419,6 +2421,92 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
}
}
+ /* Setup TPE EXT */
+ if (fd->tun_hdr.len > 0) {
+ assert(fd->tun_hdr.len <= HW_DB_INLINE_MAX_ENCAP_SIZE);
+
+ struct hw_db_inline_tpe_ext_data tpe_ext_data = {
+ .size = fd->tun_hdr.len,
+ };
+
+ memset(tpe_ext_data.hdr8, 0x0, HW_DB_INLINE_MAX_ENCAP_SIZE);
+ memcpy(tpe_ext_data.hdr8, fd->tun_hdr.d.hdr8, (fd->tun_hdr.len + 15) & ~15);
+
+ struct hw_db_tpe_ext_idx tpe_ext_idx =
+ hw_db_inline_tpe_ext_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_ext_data);
+ local_idxs[(*local_idx_counter)++] = tpe_ext_idx.raw;
+
+ if (tpe_ext_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE EXT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_rpl_ext_ptr)
+ *flm_rpl_ext_ptr = tpe_ext_idx.ids;
+ }
+
+ /* Setup TPE */
+ assert(fd->modify_field_count <= 6);
+
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip =
+ !fd->tun_hdr.new_outer && fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ tpe_data.writer[i].en = 1;
+ tpe_data.writer[i].reader_select = fd->modify_field[i].select;
+ tpe_data.writer[i].dyn = fd->modify_field[i].dyn;
+ tpe_data.writer[i].ofs = fd->modify_field[i].ofs;
+ tpe_data.writer[i].len = fd->modify_field[i].len;
+ }
+
+ if (fd->tun_hdr.new_outer) {
+ const int fcs_length = 4;
+
+ /* L4 length */
+ tpe_data.len_a_en = 1;
+ tpe_data.len_a_pos_dyn = DYN_L4;
+ tpe_data.len_a_pos_ofs = 4;
+ tpe_data.len_a_add_dyn = 18;
+ tpe_data.len_a_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_a_sub_dyn = DYN_L4;
+
+ /* L3 length */
+ tpe_data.len_b_en = 1;
+ tpe_data.len_b_pos_dyn = DYN_L3;
+ tpe_data.len_b_pos_ofs = fd->tun_hdr.ip_version == 4 ? 2 : 4;
+ tpe_data.len_b_add_dyn = 18;
+ tpe_data.len_b_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_b_sub_dyn = DYN_L3;
+
+ /* GTP length */
+ tpe_data.len_c_en = 1;
+ tpe_data.len_c_pos_dyn = DYN_L4_PAYLOAD;
+ tpe_data.len_c_pos_ofs = 2;
+ tpe_data.len_c_add_dyn = 18;
+ tpe_data.len_c_add_ofs = (uint32_t)(-8 - fcs_length) & 0xff;
+ tpe_data.len_c_sub_dyn = DYN_L4_PAYLOAD;
+ }
+
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle, &tpe_data);
+
+ local_idxs[(*local_idx_counter)++] = tpe_idx.raw;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
return 0;
}
@@ -2539,6 +2627,30 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup TPE */
+ if (fd->ttl_sub_enable) {
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip = !fd->tun_hdr.new_outer &&
+ fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_data);
+ fh->db_idxs[fh->db_idx_counter++] = tpe_idx.raw;
+ action_set_data.tpe = tpe_idx;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
}
/* Setup CAT */
@@ -2847,6 +2959,16 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (!ndev->flow_mgnt_prepared) {
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* KM Flow Type 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_mark_resource_used(ndev, RES_KM_CATEGORY, 0);
+
+ /* Reserved FLM Flow Types */
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_MISS_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_UNHANDLED_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_RCP, 0);
/* COT is locked to CFN. Don't set color for CFN 0 */
hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
@@ -2872,8 +2994,11 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
- /* SLC LR index 0 is reserved */
+ /* SLC LR and TPE index 0 are reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_EXT, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RPL, 0);
/* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
*/
--
2.45.0
* [PATCH v4 33/86] net/ntnic: add FLM module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (31 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 32/86] net/ntnic: add TPE module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 34/86] net/ntnic: add flm rcp module Serhii Iliushyk
` (53 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher (FLM) module is a high-performance stateful SDRAM lookup
and programming engine that supports exact-match lookup
at line rate for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 42 +++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 190 +++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 257 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 33 +++
.../profile_inline/flow_api_profile_inline.c | 224 ++++++++++++++-
.../flow_api_profile_inline_config.h | 58 ++++
drivers/net/ntnic/ntutil/nt_util.h | 8 +
8 files changed, 1042 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index e16dcd478f..de662c4ed1 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -367,6 +367,18 @@ int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
/* KCE/KCS/FTE KM */
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -374,6 +386,18 @@ int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
enum km_flm_if_select_e if_num, int index, uint32_t *value);
/* KCE/KCS/FTE FLM */
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -384,10 +408,14 @@ int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
@@ -638,7 +666,21 @@ int hw_mod_flm_reset(struct flow_api_backend_s *be);
int hw_mod_flm_control_flush(struct flow_api_backend_s *be);
int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+int hw_mod_flm_status_update(struct flow_api_backend_s *be);
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index 9164ec1ae0..985c821312 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -902,6 +902,95 @@ static int hw_mod_cat_kce_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kce_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kce_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v18.kce[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v21.kce[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* KCS
*/
@@ -925,6 +1014,95 @@ static int hw_mod_cat_kcs_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kcs_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kcs_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v18.kcs[index].category, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v21.kcs[index].category[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* FTE
*/
@@ -1094,6 +1272,12 @@ int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cte_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -1154,6 +1338,12 @@ int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cts_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 8c1f3f2d96..f5eaea7c4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -293,11 +293,268 @@ int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, u
return hw_mod_flm_control_mod(be, field, &value, 0);
}
+int hw_mod_flm_status_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_status_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_status_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STATUS_CALIB_SUCCESS:
+ GET_SET(be->flm.v25.status->calib_success, value);
+ break;
+
+ case HW_FLM_STATUS_CALIB_FAIL:
+ GET_SET(be->flm.v25.status->calib_fail, value);
+ break;
+
+ case HW_FLM_STATUS_INITDONE:
+ GET_SET(be->flm.v25.status->initdone, value);
+ break;
+
+ case HW_FLM_STATUS_IDLE:
+ GET_SET(be->flm.v25.status->idle, value);
+ break;
+
+ case HW_FLM_STATUS_CRITICAL:
+ GET_SET(be->flm.v25.status->critical, value);
+ break;
+
+ case HW_FLM_STATUS_PANIC:
+ GET_SET(be->flm.v25.status->panic, value);
+ break;
+
+ case HW_FLM_STATUS_CRCERR:
+ GET_SET(be->flm.v25.status->crcerr, value);
+ break;
+
+ case HW_FLM_STATUS_EFT_BP:
+ GET_SET(be->flm.v25.status->eft_bp, value);
+ break;
+
+ case HW_FLM_STATUS_CACHE_BUFFER_CRITICAL:
+ GET_SET(be->flm.v25.status->cache_buf_critical, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_status_mod(be, field, value, 1);
+}
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be)
{
return be->iface->flm_scan_flush(be->be_dev, &be->flm);
}
+static int hw_mod_flm_scan_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCAN_I:
+ GET_SET(be->flm.v25.scan->i, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_scan_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_load_bin_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_load_bin_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_LOAD_BIN:
+ GET_SET(be->flm.v25.load_bin->bin, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_load_bin_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_prio_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_prio_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PRIO_LIMIT0:
+ GET_SET(be->flm.v25.prio->limit0, value);
+ break;
+
+ case HW_FLM_PRIO_FT0:
+ GET_SET(be->flm.v25.prio->ft0, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT1:
+ GET_SET(be->flm.v25.prio->limit1, value);
+ break;
+
+ case HW_FLM_PRIO_FT1:
+ GET_SET(be->flm.v25.prio->ft1, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT2:
+ GET_SET(be->flm.v25.prio->limit2, value);
+ break;
+
+ case HW_FLM_PRIO_FT2:
+ GET_SET(be->flm.v25.prio->ft2, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT3:
+ GET_SET(be->flm.v25.prio->limit3, value);
+ break;
+
+ case HW_FLM_PRIO_FT3:
+ GET_SET(be->flm.v25.prio->ft3, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_prio_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count)
+{
+ if (count == ALL_ENTRIES)
+ count = be->flm.nb_pst_profiles;
+
+ if ((unsigned int)(start_idx + count) > be->flm.nb_pst_profiles) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ return be->iface->flm_pst_flush(be->be_dev, &be->flm, start_idx, count);
+}
+
+static int hw_mod_flm_pst_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.pst[index], (uint8_t)*value,
+ sizeof(struct flm_v25_pst_s));
+ break;
+
+ case HW_FLM_PST_BP:
+ GET_SET(be->flm.v25.pst[index].bp, value);
+ break;
+
+ case HW_FLM_PST_PP:
+ GET_SET(be->flm.v25.pst[index].pp, value);
+ break;
+
+ case HW_FLM_PST_TP:
+ GET_SET(be->flm.v25.pst[index].tp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_pst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index dec96fce85..61492090ce 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,14 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_FT_LOOKUP_KEY_A 0
+
+#define HW_DB_FT_TYPE_KM 1
+#define HW_DB_FT_LOOKUP_KEY_A 0
+#define HW_DB_FT_LOOKUP_KEY_C 2
+
+#define HW_DB_FT_TYPE_FLM 0
+#define HW_DB_FT_TYPE_KM 1
/******************************************************************************/
/* Handle */
/******************************************************************************/
@@ -59,6 +67,23 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_resource_db_flm_ft {
+ struct hw_db_inline_flm_ft_data data;
+ struct hw_db_flm_ft idx;
+ int ref;
+ } *ft;
+
+ struct hw_db_inline_resource_db_flm_match_set {
+ struct hw_db_match_set_idx idx;
+ int ref;
+ } *match_set;
+
+ struct hw_db_inline_resource_db_flm_cfn_map {
+ int cfn_idx;
+ } *cfn_map;
+ } *flm;
+
struct hw_db_inline_resource_db_km_rcp {
struct hw_db_inline_km_rcp_data data;
int ref;
@@ -70,6 +95,7 @@ struct hw_db_inline_resource_db {
} *km;
uint32_t nb_cat;
+ uint32_t nb_flm_ft;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -173,6 +199,13 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
*db_handle = db;
+
+ /* Preset data */
+
+ db->flm[0].ft[1].idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ db->flm[0].ft[1].idx.id1 = 1;
+ db->flm[0].ft[1].ref = 1;
+
return 0;
}
@@ -235,6 +268,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ hw_db_inline_flm_ft_deref(ndev, db_handle,
+ *(struct hw_db_flm_ft *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -286,6 +324,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -307,6 +348,61 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
/* Filter */
/******************************************************************************/
+/*
+ * lookup refers to key A/B/C/D, and can have values 0, 1, 2, and 3.
+ */
+static void hw_db_set_ft(struct flow_nic_dev *ndev, int type, int cfn_index, int lookup,
+ int flow_type, int enable)
+{
+ (void)type;
+ (void)enable;
+
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index = (8 * flow_type + cfn_index / cat_funcs) * max_lookups + lookup;
+ int fte_field = cfn_index % cat_funcs;
+
+ uint32_t current_bm = 0;
+ uint32_t fte_field_bm = 1 << fte_field;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t final_bm = enable ? (fte_field_bm | current_bm) : (~fte_field_bm & current_bm);
+
+ if (current_bm != final_bm) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
/*
* Setup a filter to match:
* All packets in CFN checks
@@ -348,6 +444,17 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
return -1;
+ /* KM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match FT=ft_argument for look-up C */
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft, 1);
+
/* Make all CFN checks TRUE */
if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
return -1;
@@ -1252,6 +1359,133 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+/******************************************************************************/
+/* FLM FT */
+/******************************************************************************/
+
+static int hw_db_inline_flm_ft_compare(const struct hw_db_inline_flm_ft_data *data1,
+ const struct hw_db_inline_flm_ft_data *data2)
+{
+ return data1->is_group_zero == data2->is_group_zero && data1->jump == data2->jump &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ if (data->is_group_zero) {
+ idx.error = 1;
+ return idx;
+ }
+
+ if (flm_rcp->ft[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->group];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ /* RCP 0 always uses FT 1; i.e. use unhandled FT for disabled RCP */
+ if (data->group == 0) {
+ idx.id1 = 1;
+ return idx;
+ }
+
+ if (data->is_group_zero) {
+ idx.id3 = 1;
+ return idx;
+ }
+
+ /* FLM_FT records 0, 1 and last (15) are reserved */
+ /* NOTE: RES_FLM_FLOW_TYPE resource is global and it cannot be used in _add() and _deref()
+ * to track usage of FLM_FT recipes which are group specific.
+ */
+ for (uint32_t i = 2; i < db->nb_flm_ft; ++i) {
+ if (!found && flm_rcp->ft[i].ref <= 0 &&
+ !flow_nic_is_resource_used(ndev, RES_FLM_FLOW_TYPE, i)) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (flm_rcp->ft[i].ref > 0 &&
+ hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error && idx.id3 == 0)
+ db->flm[idx.id2].ft[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+
+ if (idx.error || idx.id2 == 0 || idx.id3 > 0)
+ return;
+
+ flm_rcp = &db->flm[idx.id2];
+
+ flm_rcp->ft[idx.id1].ref -= 1;
+
+ if (flm_rcp->ft[idx.id1].ref > 0)
+ return;
+
+ flm_rcp->ft[idx.id1].ref = 0;
+ memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
+}
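The default/add/ref/deref quartet above follows a simple reference-counting lifecycle: _add() claims a free slot with ref = 1, _ref() increments, and _deref() decrements and wipes the slot when the count reaches zero. A minimal sketch with a hypothetical entry type (not the driver's real structures):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical FT slot: ref counts users, data stands in for the
 * stored hw_db_inline_flm_ft_data. */
struct ft_entry { int ref; int data; };

static int ft_add(struct ft_entry *e, int data)
{
	if (e->ref > 0)
		return -1;	/* slot busy */

	e->data = data;
	e->ref = 1;
	return 0;
}

static void ft_ref(struct ft_entry *e)
{
	e->ref += 1;
}

static void ft_deref(struct ft_entry *e)
{
	e->ref -= 1;

	if (e->ref <= 0)
		memset(e, 0, sizeof(*e));	/* last user: wipe the slot */
}
```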
/******************************************************************************/
/* HSH */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 18d959307e..a520ae1769 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_match_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_action_set_idx {
HW_DB_IDX;
};
@@ -106,6 +110,13 @@ struct hw_db_tpe_ext_idx {
HW_DB_IDX;
};
+struct hw_db_flm_idx {
+ HW_DB_IDX;
+};
+struct hw_db_flm_ft {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -128,6 +139,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE_EXT,
HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -211,6 +223,17 @@ struct hw_db_inline_km_ft_data {
struct hw_db_action_set_idx action_set;
};
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -277,6 +300,16 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx);
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_ft idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 85a8a4fc0e..85adcbb2d9 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -11,6 +11,7 @@
#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "flow_id_table.h"
#include "stream_binary_flow_api.h"
@@ -47,6 +48,128 @@ static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
return -1;
}
+/*
+ * Flow Matcher functionality
+ */
+
+static int flm_sdram_calibrate(struct flow_nic_dev *ndev)
+{
+ int success = 0;
+ uint32_t fail_value = 0;
+ uint32_t value = 0;
+
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_PRESET_ALL, 0x0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_SPLIT_SDRAM_USAGE, 0x10);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for ddr4 calibration/init done */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_SUCCESS, &value);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_FAIL, &fail_value);
+
+ if (value & 0x80000000) {
+ success = 1;
+ break;
+ }
+
+ if (fail_value != 0)
+ break;
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - SDRAM calibration failed");
+ NT_LOG(ERR, FILTER,
+ "Calibration status: success 0x%08" PRIx32 " - fail 0x%08" PRIx32,
+ value, fail_value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
+{
+ int success = 0;
+
+ /*
+ * Make sure no lookup is performed during init, i.e.
+ * disable every category and disable FLM
+ */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for FLM to enter Idle state */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_IDLE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - Never idle");
+ return -1;
+ }
+
+ success = 0;
+
+ /* Start SDRAM initialization */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x1);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_INITDONE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER,
+ "FLM initialization failed - SDRAM initialization incomplete");
+ return -1;
+ }
+
+ /* Set the INIT value back to zero to clear the bit in the SW register cache */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Enable FLM */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, enable);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ int nb_rpp_per_ps = ndev->be.flm.nb_rpp_clock_in_ps;
+ int nb_load_aps_max = ndev->be.flm.nb_load_aps_max;
+ uint32_t scan_i_value = 0;
+
+ if (NTNIC_SCANNER_LOAD > 0) {
+ scan_i_value = (1 / (nb_rpp_per_ps * 0.000000000001)) /
+ (nb_load_aps_max * NTNIC_SCANNER_LOAD);
+ }
+
+ hw_mod_flm_scan_set(&ndev->be, HW_FLM_SCAN_I, scan_i_value);
+ hw_mod_flm_scan_flush(&ndev->be);
+
+ return 0;
+}
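The scan interval written to HW_FLM_SCAN_I above converts the RPP clock period into a frequency and divides out the capacity reserved for the scanner. A hedged standalone sketch; the 1000 ps period and 100000 accesses/s used in the example below are made-up values, not hardware parameters:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the HW_FLM_SCAN_I computation from flm_sdram_reset():
 * 1 / (period_ps * 1e-12) is the RPP clock frequency in Hz; dividing
 * by (max accesses per second * reserved load fraction) gives the
 * number of clocks between scan operations. */
static uint32_t scan_interval(int nb_rpp_clock_in_ps, int nb_load_aps_max,
			      double scanner_load)
{
	if (scanner_load <= 0.0)
		return 0;	/* scanner disabled */

	double clk_hz = 1.0 / (nb_rpp_clock_in_ps * 0.000000000001);

	return (uint32_t)(clk_hz / (nb_load_aps_max * scanner_load));
}
```

A higher scanner load fraction shrinks the interval, making aging detection more frequent at the cost of packet-processing capacity.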
+
+
+
struct flm_flow_key_def_s {
union {
struct {
@@ -2354,11 +2477,11 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data,
- uint32_t group __rte_unused,
+ uint32_t group,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
- uint16_t *flm_rpl_ext_ptr __rte_unused,
- uint32_t *flm_ft __rte_unused,
+ uint16_t *flm_rpl_ext_ptr,
+ uint32_t *flm_ft,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
@@ -2507,6 +2630,25 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 0,
+ .group = group,
+ };
+ struct hw_db_flm_ft flm_ft_idx = empty_pattern
+ ? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
+ : hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ local_idxs[(*local_idx_counter)++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_ft)
+ *flm_ft = flm_ft_idx.id1;
+
return 0;
}
@@ -2514,7 +2656,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
- uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t num_dest_port, uint32_t num_queues,
uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
@@ -2808,6 +2950,21 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 1,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ };
+ struct hw_db_flm_ft flm_ft_idx =
+ hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -3028,6 +3185,63 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
NT_VIOLATING_MBR_QSL) < 0)
goto err_exit0;
+ /* FLM */
+ if (flm_sdram_calibrate(ndev) < 0)
+ goto err_exit0;
+
+ if (flm_sdram_reset(ndev, 1) < 0)
+ goto err_exit0;
+
+ /* Learn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LDS, 0);
+ /* Learn fail status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LFS, 1);
+ /* Learn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LIS, 1);
+ /* Unlearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UDS, 0);
+ /* Unlearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UIS, 0);
+ /* Relearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RDS, 0);
+ /* Relearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RIS, 0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RBL, 4);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Set the sliding windows size for flm load */
+ uint32_t bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) /
+ (32ULL * ndev->be.flm.nb_rpp_clock_in_ps)) -
+ 1ULL);
+ hw_mod_flm_load_bin_set(&ndev->be, HW_FLM_LOAD_BIN, bin);
+ hw_mod_flm_load_bin_flush(&ndev->be);
+
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT0,
+ 0); /* Drop at 100% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT0, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT1,
+ 14); /* Drop at 87.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT1, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT2,
+ 10); /* Drop at 62.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT2, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT3,
+ 6); /* Drop at 37.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT3, 1);
+ hw_mod_flm_prio_flush(&ndev->be);
+
+ /* TODO How to set and use these limits */
+ for (uint32_t i = 0; i < ndev->be.flm.nb_pst_profiles; ++i) {
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_BP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_PP, i,
+ NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_TP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT);
+ }
+
+ hw_mod_flm_pst_flush(&ndev->be, 0, ALL_ENTRIES);
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -3056,6 +3270,8 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
#endif
if (ndev->flow_mgnt_prepared) {
+ flm_sdram_reset(ndev, 0);
+
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
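The sliding-window value written to HW_FLM_LOAD_BIN during the initialization above can be sketched in isolation: the window size in seconds is converted to picoseconds, expressed as a count of 32-clock ticks, and stored minus one. The 2-second window and 1000 ps clock period below are example assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the HW_FLM_LOAD_BIN computation: window_sec seconds,
 * rpp_clock_in_ps picoseconds per RPP clock tick. */
static uint32_t load_bin(uint64_t window_sec, uint64_t rpp_clock_in_ps)
{
	return (uint32_t)(((window_sec * 1000000000000ULL) /
			   (32ULL * rpp_clock_in_ps)) - 1ULL);
}
```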
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
new file mode 100644
index 0000000000..8ba8b8f67a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -0,0 +1,58 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_CONFIG_H_
+#define _FLOW_API_PROFILE_INLINE_CONFIG_H_
+
+/*
+ * Statistics are generated each time the byte counter crosses a limit.
+ * If BYTE_LIMIT is zero then the byte counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_LIMIT + 15) bytes
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(8 + 15) = 2^23 ~~ 8MB
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT 8
+
+/*
+ * Statistics are generated each time the packet counter crosses a limit.
+ * If PKT_LIMIT is zero then the packet counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(PKT_LIMIT + 11) pkts
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(5 + 11) = 2^16 pkts ~~ 64K pkts
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT 5
+
+/*
+ * Statistics are generated each time flow time (measured in ns) crosses a
+ * limit.
+ * If BYTE_TIMEOUT is zero then the flow time does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_TIMEOUT + 15) ns
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(23 + 15) = 2^38 ns ~~ 275 sec
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT 23
+
+/*
+ * This define sets the percentage of the full processing capacity
+ * being reserved for scan operations. The scanner is responsible
+ * for detecting aged out flows and meters with statistics timeout.
+ *
+ * A high scanner load percentage will make this detection more precise
+ * but will also give lower packet processing capacity.
+ *
+ * The percentage is given as a decimal number, e.g. 0.01 for 1%, which is the recommended value.
+ */
+#define NTNIC_SCANNER_LOAD 0.01
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
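The three encodings documented in this header can be checked with a small sketch; the helper names are illustrative, not part of the driver.

```c
#include <stdint.h>

/* Threshold encodings from the comments above:
 * bytes:     2^(BYTE_LIMIT + 15)
 * packets:   2^(PKT_LIMIT + 11)
 * flow time: 2^(BYTE_TIMEOUT + 15) ns */
static uint64_t stats_byte_limit(uint32_t v)    { return 1ULL << (v + 15); }
static uint64_t stats_pkt_limit(uint32_t v)     { return 1ULL << (v + 11); }
static uint64_t stats_time_limit_ns(uint32_t v) { return 1ULL << (v + 15); }
```

The defaults above give 2^23 bytes (~8 MB), 2^16 packets and 2^38 ns (~275 s).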
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 71ecd6c68c..a482fb43ad 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -16,6 +16,14 @@
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
+/*
+ * Window size in seconds for measuring FLM load
+ * and port load.
+ * The window size must be at most 3 minutes in order
+ * to prevent overflow.
+ */
+#define FLM_LOAD_WINDOWS_SIZE 2ULL
+
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
#define PCIIDENT_TO_BUSNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 8) & 0xFFU))
#define PCIIDENT_TO_DEVNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 3) & 0x1FU))
--
2.45.0
* [PATCH v4 34/86] net/ntnic: add flm rcp module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (32 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 33/86] net/ntnic: add FLM module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 35/86] net/ntnic: add learn flow queue handling Serhii Iliushyk
` (52 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup
and programming engine which supports exact-match lookup at line rate
for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 133 ++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 +++++++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 20 ++
.../profile_inline/flow_api_profile_inline.c | 42 +++-
5 files changed, 390 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index de662c4ed1..13722c30a9 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -683,6 +683,10 @@ int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value);
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f5eaea7c4e..0a7e90c04f 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -579,3 +579,136 @@ int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int cou
}
return be->iface->flm_scrub_flush(be->be_dev, &be->flm, start_idx, count);
}
+
+static int hw_mod_flm_rcp_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.rcp[index], (uint8_t)*value,
+ sizeof(struct flm_v25_rcp_s));
+ break;
+
+ case HW_FLM_RCP_LOOKUP:
+ GET_SET(be->flm.v25.rcp[index].lookup, value);
+ break;
+
+ case HW_FLM_RCP_QW0_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW0_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_FLM_RCP_QW0_SEL:
+ GET_SET(be->flm.v25.rcp[index].qw0_sel, value);
+ break;
+
+ case HW_FLM_RCP_QW4_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW4_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw8_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW8_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw8_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_SEL:
+ GET_SET(be->flm.v25.rcp[index].sw8_sel, value);
+ break;
+
+ case HW_FLM_RCP_SW9_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw9_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW9_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw9_ofs, value);
+ break;
+
+ case HW_FLM_RCP_MASK:
+ if (get) {
+ memcpy(value, be->flm.v25.rcp[index].mask,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+
+ } else {
+ memcpy(be->flm.v25.rcp[index].mask, value,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+ }
+
+ break;
+
+ case HW_FLM_RCP_KID:
+ GET_SET(be->flm.v25.rcp[index].kid, value);
+ break;
+
+ case HW_FLM_RCP_OPN:
+ GET_SET(be->flm.v25.rcp[index].opn, value);
+ break;
+
+ case HW_FLM_RCP_IPN:
+ GET_SET(be->flm.v25.rcp[index].ipn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_DYN:
+ GET_SET(be->flm.v25.rcp[index].byt_dyn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_OFS:
+ GET_SET(be->flm.v25.rcp[index].byt_ofs, value);
+ break;
+
+ case HW_FLM_RCP_TXPLM:
+ GET_SET(be->flm.v25.rcp[index].txplm, value);
+ break;
+
+ case HW_FLM_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->flm.v25.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value)
+{
+ if (field != HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, value, 0);
+}
+
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ if (field == HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 61492090ce..0ae058b91e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -68,6 +68,9 @@ struct hw_db_inline_resource_db {
} *cat;
struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_flm_rcp_data data;
+ int ref;
+
struct hw_db_inline_resource_db_flm_ft {
struct hw_db_inline_flm_ft_data data;
struct hw_db_flm_ft idx;
@@ -96,6 +99,7 @@ struct hw_db_inline_resource_db {
uint32_t nb_cat;
uint32_t nb_flm_ft;
+ uint32_t nb_flm_rcp;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -164,6 +168,42 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+
+ db->nb_flm_ft = ndev->be.cat.nb_flow_types;
+ db->nb_flm_rcp = ndev->be.flm.nb_categories;
+ db->flm = calloc(db->nb_flm_rcp, sizeof(struct hw_db_inline_resource_db_flm_rcp));
+
+ if (db->flm == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ db->flm[i].ft =
+ calloc(db->nb_flm_ft, sizeof(struct hw_db_inline_resource_db_flm_ft));
+
+ if (db->flm[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].match_set =
+ calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_flm_match_set));
+
+ if (db->flm[i].match_set == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].cfn_map = calloc(db->nb_cat * db->nb_flm_ft,
+ sizeof(struct hw_db_inline_resource_db_flm_cfn_map));
+
+ if (db->flm[i].cfn_map == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
db->nb_km_ft = ndev->be.cat.nb_flow_types;
db->nb_km_rcp = ndev->be.km.nb_categories;
db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
@@ -222,6 +262,16 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cat);
+ if (db->flm) {
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ free(db->flm[i].ft);
+ free(db->flm[i].match_set);
+ free(db->flm[i].cfn_map);
+ }
+
+ free(db->flm);
+ }
+
if (db->km) {
for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
free(db->km[i].ft);
@@ -268,6 +318,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ hw_db_inline_flm_deref(ndev, db_handle, *(struct hw_db_flm_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_FLM_FT:
hw_db_inline_flm_ft_deref(ndev, db_handle,
*(struct hw_db_flm_ft *)&idxs[i]);
@@ -324,6 +378,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ return &db->flm[idxs[i].id1].data;
+
case HW_DB_IDX_TYPE_FLM_FT:
return NULL; /* FTs can't be easily looked up */
@@ -481,6 +538,20 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
return 0;
}
+static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int flm_rcp)
+{
+ uint32_t flm_mask[10];
+ memset(flm_mask, 0xff, sizeof(flm_mask));
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, flm_rcp, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, flm_rcp, 1);
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, flm_rcp, flm_mask);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, flm_rcp, flm_rcp + 2);
+
+ hw_mod_flm_rcp_flush(&ndev->be, flm_rcp, 1);
+}
+
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1268,10 +1339,17 @@ void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_d
void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
{
(void)ndev;
- (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
if (idx.error)
return;
+
+	db->km[idx.id1].ref -= 1;
+
+	if (db->km[idx.id1].ref <= 0) {
+		memset(&db->km[idx.id1].data, 0x0, sizeof(struct hw_db_inline_km_rcp_data));
+		db->km[idx.id1].ref = 0;
+	}
}
/******************************************************************************/
@@ -1359,6 +1437,121 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_flm_compare(const struct hw_db_inline_flm_rcp_data *data1,
+ const struct hw_db_inline_flm_rcp_data *data2)
+{
+ if (data1->qw0_dyn != data2->qw0_dyn || data1->qw0_ofs != data2->qw0_ofs ||
+ data1->qw4_dyn != data2->qw4_dyn || data1->qw4_ofs != data2->qw4_ofs ||
+ data1->sw8_dyn != data2->sw8_dyn || data1->sw8_ofs != data2->sw8_ofs ||
+ data1->sw9_dyn != data2->sw9_dyn || data1->sw9_ofs != data2->sw9_ofs ||
+ data1->outer_prot != data2->outer_prot || data1->inner_prot != data2->inner_prot) {
+ return 0;
+ }
+
+ for (int i = 0; i < 10; ++i)
+ if (data1->mask[i] != data2->mask[i])
+ return 0;
+
+ return 1;
+}
+
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_idx idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_RCP;
+ idx.id1 = group;
+
+ if (group == 0)
+ return idx;
+
+ if (db->flm[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_compare(data, &db->flm[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ref(ndev, db, idx);
+ return idx;
+ }
+
+ db->flm[idx.id1].ref = 1;
+ memcpy(&db->flm[idx.id1].data, data, sizeof(struct hw_db_inline_flm_rcp_data));
+
+ {
+ uint32_t flm_mask[10] = {
+ data->mask[0], /* SW9 */
+ data->mask[1], /* SW8 */
+ data->mask[5], data->mask[4], data->mask[3], data->mask[2], /* QW4 */
+ data->mask[9], data->mask[8], data->mask[7], data->mask[6], /* QW0 */
+ };
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, idx.id1, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, idx.id1, 1);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_DYN, idx.id1, data->qw0_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_OFS, idx.id1, data->qw0_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_DYN, idx.id1, data->qw4_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_OFS, idx.id1, data->qw4_ofs);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_DYN, idx.id1, data->sw8_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_OFS, idx.id1, data->sw8_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_DYN, idx.id1, data->sw9_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_OFS, idx.id1, data->sw9_ofs);
+
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, idx.id1, flm_mask);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, idx.id1, idx.id1 + 2);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_OPN, idx.id1, data->outer_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_IPN, idx.id1, data->inner_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_DYN, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_OFS, idx.id1, -20);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_TXPLM, idx.id1, UINT32_MAX);
+
+ hw_mod_flm_rcp_flush(&ndev->be, idx.id1, 1);
+ }
+
+ return idx;
+}
+
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->flm[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ if (idx.id1 > 0) {
+ db->flm[idx.id1].ref -= 1;
+
+ if (db->flm[idx.id1].ref <= 0) {
+ memset(&db->flm[idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_flm_rcp_data));
+ db->flm[idx.id1].ref = 0;
+
+ hw_db_inline_setup_default_flm_rcp(ndev, idx.id1);
+ }
+ }
+}
+
/******************************************************************************/
/* FLM FT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a520ae1769..9820225ffa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -138,6 +138,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE,
HW_DB_IDX_TYPE_TPE_EXT,
+ HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
@@ -165,6 +166,22 @@ struct hw_db_inline_cat_data {
uint8_t ip_prot_tunnel;
};
+struct hw_db_inline_flm_rcp_data {
+ uint64_t qw0_dyn : 5;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 5;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 5;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 5;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_prot : 1;
+ uint64_t inner_prot : 1;
+ uint64_t padding : 10;
+
+ uint32_t mask[10];
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -300,7 +317,10 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group);
void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_flm_ft_data *data);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 85adcbb2d9..94635d7aaf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -101,6 +101,11 @@ static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
hw_mod_flm_control_flush(&ndev->be);
+ for (uint32_t i = 1; i < ndev->be.flm.nb_categories; ++i)
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, i, 0x0);
+
+ hw_mod_flm_rcp_flush(&ndev->be, 1, ndev->be.flm.nb_categories - 1);
+
/* Wait for FLM to enter Idle state */
for (uint32_t i = 0; i < 1000000; ++i) {
uint32_t value = 0;
@@ -2657,8 +2662,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port, uint32_t num_queues,
- uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
- struct flm_flow_key_def_s *key_def __rte_unused)
+ uint32_t *packet_data, uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
@@ -2691,6 +2696,31 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
* Flow for group 1..32
*/
+ /* Setup FLM RCP */
+ struct hw_db_inline_flm_rcp_data flm_data = {
+ .qw0_dyn = key_def->qw0_dyn,
+ .qw0_ofs = key_def->qw0_ofs,
+ .qw4_dyn = key_def->qw4_dyn,
+ .qw4_ofs = key_def->qw4_ofs,
+ .sw8_dyn = key_def->sw8_dyn,
+ .sw8_ofs = key_def->sw8_ofs,
+ .sw9_dyn = key_def->sw9_dyn,
+ .sw9_ofs = key_def->sw9_ofs,
+ .outer_prot = key_def->outer_proto,
+ .inner_prot = key_def->inner_proto,
+ };
+ memcpy(flm_data.mask, packet_mask, sizeof(uint32_t) * 10);
+ struct hw_db_flm_idx flm_idx =
+ hw_db_inline_flm_add(dev->ndev, dev->ndev->hw_db_handle, &flm_data,
+ attr->group);
+ fh->db_idxs[fh->db_idx_counter++] = flm_idx.raw;
+
+ if (flm_idx.error) {
+		NT_LOG(ERR, FILTER, "Could not reference FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup Actions */
uint16_t flm_rpl_ext_ptr = 0;
uint32_t flm_ft = 0;
@@ -2703,7 +2733,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
/* Program flow */
- convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ convert_fh_to_fh_flm(fh, packet_data, flm_idx.id1 + 2, flm_ft, flm_rpl_ext_ptr,
flm_scrub, attr->priority & 0x3);
flm_flow_programming(fh, NT_FLM_OP_LEARN);
@@ -3275,6 +3305,12 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, 0, 0);
+ hw_mod_flm_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
+ flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
--
2.45.0
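The patch above follows the driver's recurring accessor pattern: a single `*_mod` function dispatches on a field enum with a get/set direction flag (the `GET_SET` macro), and thin `set`/`set_mask` wrappers guard special fields such as `HW_FLM_RCP_PRESET_ALL` and `HW_FLM_RCP_MASK`. A minimal self-contained sketch of that pattern, with illustrative field and struct names rather than the driver's real register layout:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical miniature of the hw_mod_* accessor pattern: one modifier
 * dispatches on a field enum and either reads or writes the shadow
 * register, and thin wrappers pass the direction flag. The two fields
 * and the struct are illustrative only. */
enum demo_field { DEMO_LOOKUP, DEMO_KID, DEMO_PRESET_ALL };

struct demo_rcp { uint32_t lookup; uint32_t kid; };

/* Returns 0 on success, -1 for an unsupported access (mirrors UNSUP_FIELD). */
static int demo_rcp_mod(struct demo_rcp *rcp, enum demo_field field,
			uint32_t *value, int get)
{
	switch (field) {
	case DEMO_PRESET_ALL:
		if (get)
			return -1;	/* preset is write-only */
		memset(rcp, (uint8_t)*value, sizeof(*rcp));
		break;
	case DEMO_LOOKUP:	/* GET_SET-style field access */
		if (get) *value = rcp->lookup; else rcp->lookup = *value;
		break;
	case DEMO_KID:
		if (get) *value = rcp->kid; else rcp->kid = *value;
		break;
	default:
		return -1;
	}
	return 0;
}

static int demo_rcp_set(struct demo_rcp *rcp, enum demo_field f, uint32_t v)
{
	return demo_rcp_mod(rcp, f, &v, 0);
}

static int demo_rcp_get(struct demo_rcp *rcp, enum demo_field f, uint32_t *v)
{
	return demo_rcp_mod(rcp, f, v, 1);
}
```

The design keeps per-version register knowledge in one switch, so adding a field means touching one function rather than a set/get pair per field.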
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 35/86] net/ntnic: add learn flow queue handling
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (33 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 34/86] net/ntnic: add flm rcp module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 36/86] net/ntnic: match and action db attributes were added Serhii Iliushyk
` (51 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement a thread for handling the flow learn queue
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 5 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 33 +++++++
.../flow_api/profile_inline/flm_lrn_queue.c | 42 +++++++++
.../flow_api/profile_inline/flm_lrn_queue.h | 11 +++
.../profile_inline/flow_api_profile_inline.c | 48 ++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 94 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 241 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 13722c30a9..17d5755634 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,11 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt);
+
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
struct hsh_func_s {
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8017aa4fc3..8ebdd98db0 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -14,6 +14,7 @@ typedef struct ntdrv_4ga_s {
char *p_drv_name;
volatile bool b_shutdown;
+ rte_thread_t flm_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 0a7e90c04f..f4c29b8bde 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,3 +712,36 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ int ret = 0;
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_LRN_DATA:
+ ret = be->iface->flm_lrn_data_flush(be->be_dev, &be->flm, value, records,
+ handled_records,
+ (sizeof(struct flm_v25_lrn_data_s) /
+ sizeof(uint32_t)),
+ inf_word_cnt, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
index ad7efafe08..6e77c28f93 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -13,8 +13,28 @@
#include "flm_lrn_queue.h"
+#define QUEUE_SIZE (1 << 13)
+
#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+void *flm_lrn_queue_create(void)
+{
+ static_assert((ELEM_SIZE & ~(size_t)3) == ELEM_SIZE, "FLM LEARN struct size");
+ struct rte_ring *q = rte_ring_create_elem("RFQ",
+ ELEM_SIZE,
+ QUEUE_SIZE,
+ SOCKET_ID_ANY,
+ RING_F_MP_HTS_ENQ | RING_F_SC_DEQ);
+ assert(q != NULL);
+ return q;
+}
+
+void flm_lrn_queue_free(void *q)
+{
+ if (q)
+ rte_ring_free(q);
+}
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q)
{
struct rte_ring_zc_data zcd;
@@ -26,3 +46,25 @@ void flm_lrn_queue_release_write_buffer(void *q)
{
rte_ring_enqueue_zc_elem_finish(q, 1);
}
+
+read_record flm_lrn_queue_get_read_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ read_record rr;
+
+ if (rte_ring_dequeue_zc_burst_elem_start(q, ELEM_SIZE, QUEUE_SIZE, &zcd, NULL) != 0) {
+ rr.num = zcd.n1;
+ rr.p = zcd.ptr1;
+
+ } else {
+ rr.num = 0;
+ rr.p = NULL;
+ }
+
+ return rr;
+}
+
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num)
+{
+ rte_ring_dequeue_zc_elem_finish(q, num);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
index 8cee0c8e78..40558f4201 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -8,7 +8,18 @@
#include <stdint.h>
+typedef struct read_record {
+ uint32_t *p;
+ uint32_t num;
+} read_record;
+
+void *flm_lrn_queue_create(void);
+void flm_lrn_queue_free(void *q);
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q);
void flm_lrn_queue_release_write_buffer(void *q);
+read_record flm_lrn_queue_get_read_buffer(void *q);
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num);
+
#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 94635d7aaf..6ad9f53954 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -39,6 +39,48 @@
static void *flm_lrn_queue_arr;
+static void flm_setup_queues(void)
+{
+ flm_lrn_queue_arr = flm_lrn_queue_create();
+ assert(flm_lrn_queue_arr != NULL);
+}
+
+static void flm_free_queues(void)
+{
+ flm_lrn_queue_free(flm_lrn_queue_arr);
+}
+
+static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ read_record r = flm_lrn_queue_get_read_buffer(flm_lrn_queue_arr);
+
+ if (r.num) {
+ uint32_t handled_records = 0;
+
+ if (hw_mod_flm_lrn_data_set_flush(&dev->ndev->be, HW_FLM_FLOW_LRN_DATA, r.p, r.num,
+ &handled_records, inf_word_cnt, sta_word_cnt)) {
+ NT_LOG(ERR, FILTER, "Flow programming failed");
+
+ } else if (handled_records > 0) {
+ flm_lrn_queue_release_read_buffer(flm_lrn_queue_arr, handled_records);
+ }
+ }
+
+ return r.num;
+}
+
+static uint32_t flm_update(struct flow_eth_dev *dev)
+{
+ static uint32_t inf_word_cnt;
+ static uint32_t sta_word_cnt;
+
+ if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
+ return 1;
+
+ return inf_word_cnt + sta_word_cnt;
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -4218,6 +4260,12 @@ static const struct profile_inline_ops ops = {
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * NT Flow FLM Meter API
+ */
+ .flm_setup_queues = flm_setup_queues,
+ .flm_free_queues = flm_free_queues,
+ .flm_update = flm_update,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index a509a8eb51..bfca8f28b1 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -24,6 +24,11 @@
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
+#define THREAD_JOIN(a) rte_thread_join(a, NULL)
+#define THREAD_FUNC static uint32_t
+#define THREAD_RETURN (0)
#define HW_MAX_PKT_LEN (10000)
#define MAX_MTU (HW_MAX_PKT_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)
@@ -120,6 +125,16 @@ store_pdrv(struct drv_s *p_drv)
rte_spinlock_unlock(&hwlock);
}
+static void clear_pdrv(struct drv_s *p_drv)
+{
+ if (p_drv->adapter_no > NUM_ADAPTER_MAX)
+ return;
+
+ rte_spinlock_lock(&hwlock);
+ _g_p_drv[p_drv->adapter_no] = NULL;
+ rte_spinlock_unlock(&hwlock);
+}
+
static struct drv_s *
get_pdrv_from_pci(struct rte_pci_addr addr)
{
@@ -1240,6 +1255,13 @@ eth_dev_set_link_down(struct rte_eth_dev *eth_dev)
static void
drv_deinit(struct drv_s *p_drv)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ return;
+ }
+
const struct adapter_ops *adapter_ops = get_adapter_ops();
if (adapter_ops == NULL) {
@@ -1251,6 +1273,22 @@ drv_deinit(struct drv_s *p_drv)
return;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ fpga_info_t *fpga_info = &p_nt_drv->adapter_info.fpga_info;
+
+ /*
+	 * Mark the global pdrv as cleared; some threads use this to terminate.
+	 * Wait 1 second to give the threads a chance to see the termination.
+ */
+ clear_pdrv(p_drv);
+ nt_os_wait_usec(1000000);
+
+ /* stop statistics threads */
+ p_drv->ntdrv.b_shutdown = true;
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ THREAD_JOIN(p_nt_drv->flm_thread);
+ profile_inline_ops->flm_free_queues();
+ }
/* stop adapter */
adapter_ops->deinit(&p_nt_drv->adapter_info);
@@ -1359,6 +1397,43 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.promiscuous_enable = promiscuous_enable,
};
+/*
+ * Adapter flm stat thread
+ */
+THREAD_FUNC adapter_flm_update_thread_fn(void *context)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: profile_inline module uninitialized", __func__);
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct nt4ga_filter_s *p_nt4ga_filter = &p_adapter_info->nt4ga_filter;
+ struct flow_nic_dev *p_flow_nic_dev = p_nt4ga_filter->mp_flow_device;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: waiting for port configuration",
+ p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (p_flow_nic_dev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ struct flow_eth_dev *dev = p_flow_nic_dev->eth_base;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: begin", p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (!p_drv->ntdrv.b_shutdown)
+ if (profile_inline_ops->flm_update(dev) == 0)
+ nt_os_wait_usec(10);
+
+ NT_LOG(DBG, NTNIC, "%s: %s: end", p_adapter_info->mp_adapter_id_str, __func__);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1369,6 +1444,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* Return statement is not necessary here to allow traffic processing by SW */
}
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1597,6 +1679,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (profile_inline_ops != NULL && fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ profile_inline_ops->flm_setup_queues();
+ res = THREAD_CTRL_CREATE(&p_nt_drv->flm_thread, "ntnic-nt_flm_update_thr",
+ adapter_flm_update_thread_fn, (void *)p_drv);
+
+ if (res) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1069be2f85..27d6cbef01 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -256,6 +256,13 @@ struct profile_inline_ops {
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+
+ /*
+ * NT Flow FLM queue API
+ */
+ void (*flm_setup_queues)(void);
+ void (*flm_free_queues)(void);
+ uint32_t (*flm_update)(struct flow_eth_dev *dev);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v4 36/86] net/ntnic: match and action db attributes were added
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (34 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 35/86] net/ntnic: add learn flow queue handling Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 37/86] net/ntnic: add flow dump feature Serhii Iliushyk
` (50 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement match/action set dereferencing
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../profile_inline/flow_api_hw_db_inline.c | 795 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 35 +
.../profile_inline/flow_api_profile_inline.c | 55 ++
3 files changed, 885 insertions(+)
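This patch's `hw_db_copy_ft` derives an FTE register index and a bit position from a CFN id when copying a flow-type enable bit between CFNs: each flow type owns 8 banks of `max_lookups` registers, a CFN selects a bank by division and a bit within the bank by modulo. A standalone sketch of that index arithmetic (the constants 8 and `max_lookups = 4` are taken from the patch; function names are illustrative):

```c
#include <assert.h>

/* Illustration of the FTE addressing math only, not the driver's API. */
static const int max_lookups = 4;

/* Register index: bank = cfn / cat_funcs within the flow type's 8 banks,
 * then one register per lookup. */
static int demo_fte_index(int flow_type, int cfn, int lookup, int cat_funcs)
{
	return (8 * flow_type + cfn / cat_funcs) * max_lookups + lookup;
}

/* Enable-bit mask for the CFN within its bank. */
static int demo_fte_field_bm(int cfn, int cat_funcs)
{
	return 1 << (cfn % cat_funcs);
}
```

With these two values, the copy reads the destination and source enable bitmaps, transfers the single source bit, and flushes only when the destination bitmap actually changed.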
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 0ae058b91e..52f85b65af 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,9 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_INLINE_ACTION_SET_NB 512
+#define HW_DB_INLINE_MATCH_SET_NB 512
+
#define HW_DB_FT_LOOKUP_KEY_A 0
#define HW_DB_FT_TYPE_KM 1
@@ -110,6 +113,20 @@ struct hw_db_inline_resource_db {
int cfn_hw;
int ref;
} *cfn;
+
+ uint32_t cfn_priority_counter;
+ uint32_t set_priority_counter;
+
+ struct hw_db_inline_resource_db_action_set {
+ struct hw_db_inline_action_set_data data;
+ int ref;
+ } action_set[HW_DB_INLINE_ACTION_SET_NB];
+
+ struct hw_db_inline_resource_db_match_set {
+ struct hw_db_inline_match_set_data data;
+ int ref;
+ uint32_t set_priority;
+ } match_set[HW_DB_INLINE_MATCH_SET_NB];
};
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
@@ -292,6 +309,16 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ hw_db_inline_match_set_deref(ndev, db_handle,
+ *(struct hw_db_match_set_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ hw_db_inline_action_set_deref(ndev, db_handle,
+ *(struct hw_db_action_set_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_CAT:
hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
break;
@@ -360,6 +387,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_NONE:
return NULL;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ return &db->match_set[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ return &db->action_set[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_CAT:
return &db->cat[idxs[i].ids].data;
@@ -552,6 +585,763 @@ static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int fl
}
+static void hw_db_copy_ft(struct flow_nic_dev *ndev, int type, int cfn_dst, int cfn_src,
+ int lookup, int flow_type)
+{
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index_dst = (8 * flow_type + cfn_dst / cat_funcs) * max_lookups + lookup;
+ int fte_field_dst = cfn_dst % cat_funcs;
+
+ int fte_index_src = (8 * flow_type + cfn_src / cat_funcs) * max_lookups + lookup;
+ int fte_field_src = cfn_src % cat_funcs;
+
+ uint32_t current_bm_dst = 0;
+ uint32_t current_bm_src = 0;
+ uint32_t fte_field_bm_dst = 1 << fte_field_dst;
+ uint32_t fte_field_bm_src = 1 << fte_field_src;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, ¤t_bm_dst);
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_src, ¤t_bm_src);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, ¤t_bm_dst);
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_src, ¤t_bm_src);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t enable = current_bm_src & fte_field_bm_src;
+ uint32_t final_bm_dst = enable ? (fte_field_bm_dst | current_bm_dst)
+ : (~fte_field_bm_dst & current_bm_dst);
+
+ if (current_bm_dst != final_bm_dst) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+static int hw_db_inline_filter_apply(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id,
+ struct hw_db_match_set_idx match_set_idx,
+ struct hw_db_flm_ft flm_ft_idx,
+ struct hw_db_action_set_idx action_set_idx)
+{
+
+ const struct hw_db_inline_match_set_data *match_set =
+ &db->match_set[match_set_idx.ids].data;
+ const struct hw_db_inline_cat_data *cat = &db->cat[match_set->cat.ids].data;
+
+ const int km_ft = match_set->km_ft.id1;
+ const int km_rcp = (int)db->km[match_set->km.id1].data.rcp;
+
+ const int flm_ft = flm_ft_idx.id1;
+ const int flm_rcp = flm_ft_idx.id2;
+
+ const struct hw_db_inline_action_set_data *action_set =
+ &db->action_set[action_set_idx.ids].data;
+ const struct hw_db_inline_cot_data *cot = &db->cot[action_set->cot.ids].data;
+
+ const int qsl_hw_id = action_set->qsl.ids;
+ const int slc_lr_hw_id = action_set->slc_lr.ids;
+ const int tpe_hw_id = action_set->tpe.ids;
+ const int hsh_hw_id = action_set->hsh.ids;
+
+ /* Setup default FLM RCP if needed */
+ if (flm_rcp > 0 && db->flm[flm_rcp].ref <= 0)
+ hw_db_inline_setup_default_flm_rcp(ndev, flm_rcp);
+
+ /* Setup CAT.CFN */
+ {
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x0);
+
+ /* Protocol checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_ISL, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_CFP, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MAC, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L2, cat_hw_id, 0, cat->ptc_mask_l2);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VNTAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VLAN, cat_hw_id, 0, cat->vlan_mask);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, cat->ptc_mask_l3);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_FRAG, cat_hw_id, 0,
+ cat->ptc_mask_frag);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_IP_PROT, cat_hw_id, 0, cat->ip_prot);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L4, cat_hw_id, 0, cat->ptc_mask_l4);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TUNNEL, cat_hw_id, 0,
+ cat->ptc_mask_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L2, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_VLAN, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L3, cat_hw_id, 0,
+ cat->ptc_mask_l3_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_FRAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_IP_PROT, cat_hw_id, 0,
+ cat->ip_prot_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L4, cat_hw_id, 0,
+ cat->ptc_mask_l4_tunnel);
+
+ /* Error checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_CV, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_FCS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TRUNC, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L3_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L4_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L3_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L4_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl_tunnel);
+
+ /* MAC port check */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_MAC_PORT, cat_hw_id, 0,
+ cat->mac_port_mask);
+
+ /* Pattern match checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMP, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_DCT, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_EXT_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMB, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_AND_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_OR_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_INV, cat_hw_id, 0, -1);
+
+ /* Length checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC_INV, cat_hw_id, 0, -1);
+
+ /* KM and FLM */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3);
+
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 0, cat_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 0, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 1, hsh_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 2, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 2,
+ slc_lr_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 5, tpe_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 5, 0);
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id,
+ 0x001 | 0x004 | (qsl_hw_id ? 0x008 : 0) |
+ (slc_lr_hw_id ? 0x020 : 0) | 0x040 |
+ (tpe_hw_id ? 0x400 : 0));
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ km_rcp);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ flm_rcp);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, flm_ft, 1);
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COLOR, cat_hw_id, cot->frag_rcp << 10);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_KM, cat_hw_id,
+ cot->matcher_color_contrib);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ return 0;
+}
+
+static void hw_db_inline_filter_clear(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id)
+{
+ /* Setup CAT.CFN */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < 6; ++i) {
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + i, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + i, 0);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0);
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft,
+ 0);
+ }
+ }
+
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+}
+
+static void hw_db_inline_filter_copy(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db, int cfn_dst, int cfn_src)
+{
+ uint32_t val = 0;
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_COPY_FROM, cfn_dst, 0, cfn_src);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < offset; ++i) {
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_dst + i, val);
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_dst + i, val);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cfn_dst, offset);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_get(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_src, &val);
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_dst, val);
+ hw_mod_cat_cte_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_km_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_KM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_flm_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_C, ft);
+ }
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COPY_FROM, cfn_dst, cfn_src);
+ hw_mod_cat_cot_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+}
+
+/*
+ * Algorithm for moving CFN entries to make space with respect to priority.
+ * The algorithm will make the fewest possible moves to fit a new CFN entry.
+ */
+static int hw_db_inline_alloc_prioritized_cfn(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ struct hw_db_match_set_idx match_set_idx)
+{
+ const struct hw_db_inline_resource_db_match_set *match_set =
+ &db->match_set[match_set_idx.ids];
+
+ uint64_t priority = ((uint64_t)(match_set->data.priority & 0xff) << 56) |
+ ((uint64_t)(0xffffff - (match_set->set_priority & 0xffffff)) << 32) |
+ (0xffffffff - ++db->cfn_priority_counter);
+
+ int db_cfn_idx = -1;
+
+ struct {
+ uint64_t priority;
+ uint32_t idx;
+ } sorted_priority[db->nb_cat];
+
+ memset(sorted_priority, 0x0, sizeof(sorted_priority));
+
+ uint32_t in_use_count = 0;
+
+ for (uint32_t i = 1; i < db->nb_cat; ++i) {
+ if (db->cfn[i].ref > 0) {
+ sorted_priority[db->cfn[i].cfn_hw].priority = db->cfn[i].priority;
+ sorted_priority[db->cfn[i].cfn_hw].idx = i;
+ in_use_count += 1;
+
+ } else if (db_cfn_idx == -1) {
+ db_cfn_idx = (int)i;
+ }
+ }
+
+ if (in_use_count >= db->nb_cat - 1)
+ return -1;
+
+ if (in_use_count == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = 1;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ int goal = 1;
+ int free_before = -1000000;
+ int free_after = 1000000;
+ int found_smaller = 0;
+
+ for (int i = 1; i < (int)db->nb_cat; ++i) {
+ if (sorted_priority[i].priority > priority) { /* Bigger */
+ goal = i + 1;
+
+ } else if (sorted_priority[i].priority == 0) { /* Not set */
+ if (found_smaller) {
+ if (free_after > i)
+ free_after = i;
+
+ } else {
+ free_before = i;
+ }
+
+ } else {/* Smaller */
+ found_smaller = 1;
+ }
+ }
+
+ int diff_before = goal - free_before - 1;
+ int diff_after = free_after - goal;
+
+ if (goal < (int)db->nb_cat && sorted_priority[goal].priority == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ if (diff_after <= diff_before) {
+ for (int i = free_after; i > goal; --i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i - 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+
+ } else {
+ goal -= 1;
+
+ for (int i = free_before; i < goal; ++i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i + 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+ }
+
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+
+ return db_cfn_idx;
+}
+
+static void hw_db_inline_free_prioritized_cfn(struct hw_db_inline_resource_db *db, int cfn_hw)
+{
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (db->cfn[i].cfn_hw == cfn_hw) {
+ memset(&db->cfn[i], 0x0, sizeof(struct hw_db_inline_resource_db_cfn));
+ break;
+ }
+ }
+}
+
+static void hw_db_inline_update_active_filters(struct flow_nic_dev *ndev, void *db_handle,
+ int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[group];
+ struct hw_db_inline_resource_db_flm_cfn_map *cell;
+
+ for (uint32_t match_set_idx = 0; match_set_idx < db->nb_cat; ++match_set_idx) {
+ for (uint32_t ft_idx = 0; ft_idx < db->nb_flm_ft; ++ft_idx) {
+ int active = flm_rcp->ft[ft_idx].ref > 0 &&
+ flm_rcp->match_set[match_set_idx].ref > 0;
+ cell = &flm_rcp->cfn_map[match_set_idx * db->nb_flm_ft + ft_idx];
+
+ if (active && cell->cfn_idx == 0) {
+ /* Setup filter */
+ cell->cfn_idx = hw_db_inline_alloc_prioritized_cfn(ndev, db,
+ flm_rcp->match_set[match_set_idx].idx);
+ hw_db_inline_filter_apply(ndev, db, db->cfn[cell->cfn_idx].cfn_hw,
+ flm_rcp->match_set[match_set_idx].idx,
+ flm_rcp->ft[ft_idx].idx,
+ group == 0
+ ? db->match_set[flm_rcp->match_set[match_set_idx]
+ .idx.ids]
+ .data.action_set
+ : flm_rcp->ft[ft_idx].data.action_set);
+ }
+
+ if (!active && cell->cfn_idx > 0) {
+ /* Teardown filter */
+ hw_db_inline_filter_clear(ndev, db, db->cfn[cell->cfn_idx].cfn_hw);
+ hw_db_inline_free_prioritized_cfn(db,
+ db->cfn[cell->cfn_idx].cfn_hw);
+ cell->cfn_idx = 0;
+ }
+ }
+ }
+}
+
+/******************************************************************************/
+/* Match set */
+/******************************************************************************/
+
+static int hw_db_inline_match_set_compare(const struct hw_db_inline_match_set_data *data1,
+ const struct hw_db_inline_match_set_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->km_ft.raw == data2->km_ft.raw && data1->jump == data2->jump;
+}
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_match_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_MATCH_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_MATCH_SET_NB; ++i) {
+ if (!found && db->match_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->match_set[i].ref > 0 &&
+ hw_db_inline_match_set_compare(data, &db->match_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_match_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ found = 0;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].ref <= 0) {
+ found = 1;
+ flm_rcp->match_set[i].ref = 1;
+ flm_rcp->match_set[i].idx.raw = idx.raw;
+ break;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->match_set[idx.ids].data, data, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 1;
+ db->match_set[idx.ids].set_priority = ++db->set_priority_counter;
+
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
+ return idx;
+}
+
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->match_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+ int jump;
+
+ if (idx.error)
+ return;
+
+ db->match_set[idx.ids].ref -= 1;
+
+ if (db->match_set[idx.ids].ref > 0)
+ return;
+
+ jump = db->match_set[idx.ids].data.jump;
+ flm_rcp = &db->flm[jump];
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].idx.raw == idx.raw) {
+ flm_rcp->match_set[i].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, jump);
+ memset(&flm_rcp->match_set[i], 0x0,
+ sizeof(struct hw_db_inline_resource_db_flm_match_set));
+ }
+ }
+
+ memset(&db->match_set[idx.ids].data, 0x0, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 0;
+}
+
+/******************************************************************************/
+/* Action set */
+/******************************************************************************/
+
+static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_data *data1,
+ const struct hw_db_inline_action_set_data *data2)
+{
+ if (data1->contains_jump)
+ return data2->contains_jump && data1->jump == data2->jump;
+
+ return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
+ data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
+ data1->hsh.raw == data2->hsh.raw;
+}
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_action_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_ACTION_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_ACTION_SET_NB; ++i) {
+ if (!found && db->action_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->action_set[i].ref > 0 &&
+ hw_db_inline_action_set_compare(data, &db->action_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_action_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->action_set[idx.ids].data, data, sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->action_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->action_set[idx.ids].ref -= 1;
+
+ if (db->action_set[idx.ids].ref <= 0) {
+ memset(&db->action_set[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1593,6 +2383,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
return idx;
}
@@ -1647,6 +2439,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->group);
+
return idx;
}
@@ -1677,6 +2471,7 @@ void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struc
return;
flm_rcp->ft[idx.id1].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, idx.id2);
memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 9820225ffa..33de674b72 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -131,6 +131,10 @@ struct hw_db_hsh_idx {
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
+
+ HW_DB_IDX_TYPE_MATCH_SET,
+ HW_DB_IDX_TYPE_ACTION_SET,
+
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
@@ -145,6 +149,17 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_HSH,
};
+/* Container types */
+struct hw_db_inline_match_set_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_km_ft km_ft;
+ struct hw_db_action_set_idx action_set;
+ int jump;
+
+ uint8_t priority;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -224,6 +239,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
@@ -262,6 +278,25 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data);
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data);
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+
+/**/
+
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_cot_data *data);
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6ad9f53954..2afc7447d4 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2677,10 +2677,30 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup Action Set */
+ struct hw_db_inline_action_set_data action_set_data = {
+ .contains_jump = 0,
+ .cot = cot_idx,
+ .qsl = qsl_idx,
+ .slc_lr = slc_lr_idx,
+ .tpe = tpe_idx,
+ .hsh = hsh_idx,
+ };
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
+ local_idxs[(*local_idx_counter)++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 0,
.group = group,
+ .action_set = action_set_idx,
};
struct hw_db_flm_ft flm_ft_idx = empty_pattern
? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
@@ -2867,6 +2887,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
}
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &action_set_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup CAT */
struct hw_db_inline_cat_data cat_data = {
.vlan_mask = (0xf << fd->vlans) & 0xf,
@@ -2986,6 +3018,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
struct hw_db_inline_km_ft_data km_ft_data = {
.cat = cat_idx,
.km = km_idx,
+ .action_set = action_set_idx,
};
struct hw_db_km_ft km_ft_idx =
hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
@@ -3022,10 +3055,32 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup Match Set */
+ struct hw_db_inline_match_set_data match_set_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ .km_ft = km_ft_idx,
+ .action_set = action_set_idx,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .priority = attr->priority & 0xff,
+ };
+ struct hw_db_match_set_idx match_set_idx =
+ hw_db_inline_match_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &match_set_data);
+ fh->db_idxs[fh->db_idx_counter++] = match_set_idx.raw;
+
+ if (match_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Match Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 1,
.jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .action_set = action_set_idx,
};
struct hw_db_flm_ft flm_ft_idx =
hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
--
2.45.0
* [PATCH v4 37/86] net/ntnic: add flow dump feature
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (35 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 36/86] net/ntnic: match and action db attributes were added Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 38/86] net/ntnic: add flow flush Serhii Iliushyk
` (49 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Add the possibility to dump a flow in human-readable format
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 17 ++
.../profile_inline/flow_api_hw_db_inline.c | 264 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 3 +
.../profile_inline/flow_api_profile_inline.c | 81 ++++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 29 ++
drivers/net/ntnic/ntnic_mod_reg.h | 11 +
8 files changed, 413 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index e52363f04e..155a9e1fd6 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -281,6 +281,8 @@ struct flow_handle {
struct flow_handle *next;
struct flow_handle *prev;
+ /* Flow specific pointer to application data stored during action creation. */
+ void *context;
void *user_data;
union {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 043e4244fc..7f1e311988 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1006,6 +1006,22 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
return 0;
}
+static int flow_dev_dump(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_dev_dump_profile_inline(dev, flow, caller_id, file, error);
+}
+
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf)
{
@@ -1031,6 +1047,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_dev_dump = flow_dev_dump,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 52f85b65af..b5fee67e67 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -372,6 +372,270 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ char str_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(str_buffer);
+
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_MATCH_SET: {
+ const struct hw_db_inline_match_set_data *data =
+ &db->match_set[idxs[i].ids].data;
+ fprintf(file, " MATCH_SET %d, priority %d\n", idxs[i].ids,
+ (int)data->priority);
+ fprintf(file, " CAT id %d, KM id %d, KM_FT id %d, ACTION_SET id %d\n",
+ data->cat.ids, data->km.id1, data->km_ft.id1,
+ data->action_set.ids);
+
+ if (data->jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_ACTION_SET: {
+ const struct hw_db_inline_action_set_data *data =
+ &db->action_set[idxs[i].ids].data;
+ fprintf(file, " ACTION_SET %d\n", idxs[i].ids);
+
+ if (data->contains_jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ else
+ fprintf(file,
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ data->cot.ids, data->qsl.ids, data->slc_lr.ids,
+ data->tpe.ids, data->hsh.ids);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_CAT: {
+ const struct hw_db_inline_cat_data *data = &db->cat[idxs[i].ids].data;
+ fprintf(file, " CAT %d\n", idxs[i].ids);
+ fprintf(file, " Port msk 0x%02x, VLAN msk 0x%02x\n",
+ (int)data->mac_port_mask, (int)data->vlan_mask);
+ fprintf(file,
+ " Proto msks: Frag 0x%02x, l2 0x%02x, l3 0x%02x, l4 0x%02x, l3t 0x%02x, l4t 0x%02x\n",
+ (int)data->ptc_mask_frag, (int)data->ptc_mask_l2,
+ (int)data->ptc_mask_l3, (int)data->ptc_mask_l4,
+ (int)data->ptc_mask_l3_tunnel, (int)data->ptc_mask_l4_tunnel);
+ fprintf(file, " IP protocol: pn %u pnt %u\n", data->ip_prot,
+ data->ip_prot_tunnel);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_QSL: {
+ const struct hw_db_inline_qsl_data *data = &db->qsl[idxs[i].ids].data;
+ fprintf(file, " QSL %d\n", idxs[i].ids);
+
+ if (data->discard) {
+ fprintf(file, " Discard\n");
+ break;
+ }
+
+ if (data->drop) {
+ fprintf(file, " Drop\n");
+ break;
+ }
+
+ fprintf(file, " Table size %d\n", data->table_size);
+
+ for (uint32_t i = 0;
+ i < data->table_size && i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ fprintf(file, " %u: Queue %d, TX port %d\n", i,
+ (data->table[i].queue_en ? (int)data->table[i].queue : -1),
+ (data->table[i].tx_port_en ? (int)data->table[i].tx_port
+ : -1));
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_COT: {
+ const struct hw_db_inline_cot_data *data = &db->cot[idxs[i].ids].data;
+ fprintf(file, " COT %d\n", idxs[i].ids);
+ fprintf(file, " Color contrib %d, frag rcp %d\n",
+ (int)data->matcher_color_contrib, (int)data->frag_rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_SLC_LR: {
+ const struct hw_db_inline_slc_lr_data *data =
+ &db->slc_lr[idxs[i].ids].data;
+ fprintf(file, " SLC_LR %d\n", idxs[i].ids);
+ fprintf(file, " Enable %u, dyn %u, ofs %u\n", data->head_slice_en,
+ data->head_slice_dyn, data->head_slice_ofs);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE: {
+ const struct hw_db_inline_tpe_data *data = &db->tpe[idxs[i].ids].data;
+ fprintf(file, " TPE %d\n", idxs[i].ids);
+ fprintf(file, " Insert len %u, new outer %u, calc eth %u\n",
+ data->insert_len, data->new_outer,
+ data->calc_eth_type_from_inner_ip);
+ fprintf(file, " TTL enable %u, dyn %u, ofs %u\n", data->ttl_en,
+ data->ttl_dyn, data->ttl_ofs);
+ fprintf(file,
+ " Len A enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_a_en, data->len_a_pos_dyn, data->len_a_pos_ofs,
+ data->len_a_add_dyn, data->len_a_add_ofs, data->len_a_sub_dyn);
+ fprintf(file,
+ " Len B enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_b_en, data->len_b_pos_dyn, data->len_b_pos_ofs,
+ data->len_b_add_dyn, data->len_b_add_ofs, data->len_b_sub_dyn);
+ fprintf(file,
+ " Len C enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_c_en, data->len_c_pos_dyn, data->len_c_pos_ofs,
+ data->len_c_add_dyn, data->len_c_add_ofs, data->len_c_sub_dyn);
+
+ for (uint32_t i = 0; i < 6; ++i)
+ if (data->writer[i].en)
+ fprintf(file,
+ " Writer %i: Reader %u, dyn %u, ofs %u, len %u\n",
+ i, data->writer[i].reader_select,
+ data->writer[i].dyn, data->writer[i].ofs,
+ data->writer[i].len);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE_EXT: {
+ const struct hw_db_inline_tpe_ext_data *data =
+ &db->tpe_ext[idxs[i].ids].data;
+ const int rpl_rpl_length = ((int)data->size + 15) / 16;
+ fprintf(file, " TPE_EXT %d\n", idxs[i].ids);
+ fprintf(file, " Encap data, size %u\n", data->size);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ fprintf(file, " ");
+
+ for (int n = 15; n >= 0; --n)
+ fprintf(file, " %02x%s", data->hdr8[i * 16 + n],
+ n == 8 ? " " : "");
+
+ fprintf(file, "\n");
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_RCP: {
+ const struct hw_db_inline_flm_rcp_data *data = &db->flm[idxs[i].id1].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " QW0 dyn %u, ofs %u, QW4 dyn %u, ofs %u\n",
+ data->qw0_dyn, data->qw0_ofs, data->qw4_dyn, data->qw4_ofs);
+ fprintf(file, " SW8 dyn %u, ofs %u, SW9 dyn %u, ofs %u\n",
+ data->sw8_dyn, data->sw8_ofs, data->sw9_dyn, data->sw9_ofs);
+ fprintf(file, " Outer prot %u, inner prot %u\n", data->outer_prot,
+ data->inner_prot);
+ fprintf(file, " Mask:\n");
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[0],
+ data->mask[1], data->mask[2], data->mask[3], data->mask[4]);
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[5],
+ data->mask[6], data->mask[7], data->mask[8], data->mask[9]);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_FT: {
+ const struct hw_db_inline_flm_ft_data *data =
+ &db->flm[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " FLM_FT %d\n", idxs[i].id1);
+
+ if (data->is_group_zero)
+ fprintf(file, " Jump to %d\n", data->jump);
+
+ else
+ fprintf(file, " Group %d\n", data->group);
+
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_RCP: {
+ const struct hw_db_inline_km_rcp_data *data = &db->km[idxs[i].id1].data;
+ fprintf(file, " KM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " HW id %u\n", data->rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_FT: {
+ const struct hw_db_inline_km_ft_data *data =
+ &db->km[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " KM_FT %d\n", idxs[i].id1);
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ fprintf(file, " KM_RCP id %d\n", data->km.ids);
+ fprintf(file, " CAT id %d\n", data->cat.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_HSH: {
+ const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
+ fprintf(file, " HSH %d\n", idxs[i].ids);
+
+ switch (data->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ fprintf(file, " Func: NTH10\n");
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ fprintf(file, " Func: Toeplitz\n");
+ fprintf(file, " Key:");
+
+ for (uint8_t i = 0; i < MAX_RSS_KEY_LEN; i++) {
+ if (i % 10 == 0)
+ fprintf(file, "\n ");
+
+ fprintf(file, " %02x", data->key[i]);
+ }
+
+ fprintf(file, "\n");
+ break;
+
+ default:
+ fprintf(file, " Func: %u\n", data->func);
+ }
+
+ fprintf(file, " Hash mask hex:\n");
+ fprintf(file, " %016" PRIx64 "\n", data->hash_mask);
+
+ /* convert hash mask to human readable RTE_ETH_RSS_* form if possible */
+ if (sprint_nt_rss_mask(str_buffer, rss_buffer_len, "\n ",
+ data->hash_mask) == 0) {
+ fprintf(file, " Hash mask flags:%s\n", str_buffer);
+ }
+
+ break;
+ }
+
+ default: {
+ fprintf(file, " Unknown item. Type %u\n", idxs[i].type);
+ break;
+ }
+ }
+ }
+}
+
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ fprintf(file, "CFN status:\n");
+
+ for (uint32_t id = 0; id < db->nb_cat; ++id)
+ if (db->cfn[id].cfn_hw)
+ fprintf(file, " ID %d, HW id %d, priority 0x%" PRIx64 "\n", (int)id,
+ db->cfn[id].cfn_hw, db->cfn[id].priority);
+}
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 33de674b72..a9d31c86ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -276,6 +276,9 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
uint32_t size);
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file);
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file);
/**/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 2afc7447d4..9727a28d45 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4300,6 +4300,86 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
return res;
}
+static void dump_flm_data(const uint32_t *data, FILE *file)
+{
+ for (unsigned int i = 0; i < 10; ++i) {
+ fprintf(file, "%s%02X %02X %02X %02X%s", i % 2 ? "" : " ",
+ (data[i] >> 24) & 0xff, (data[i] >> 16) & 0xff, (data[i] >> 8) & 0xff,
+ data[i] & 0xff, i % 2 ? "\n" : " ");
+ }
+}
+
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ if (flow != NULL) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLM) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+
+ } else {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs, flow->db_idx_counter,
+ file);
+ }
+
+ } else {
+ int max_flm_count = 1000;
+
+ hw_db_inline_dump_cfn(dev->ndev, dev->ndev->hw_db_handle, file);
+
+ flow = dev->ndev->flow_base;
+
+ while (flow) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs,
+ flow->db_idx_counter, file);
+ }
+
+ flow = flow->next;
+ }
+
+ flow = dev->ndev->flow_base_flm;
+
+ while (flow && max_flm_count >= 0) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+ max_flm_count -= 1;
+ }
+
+ flow = flow->next;
+ }
+ }
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
static const struct profile_inline_ops ops = {
/*
@@ -4308,6 +4388,7 @@ static const struct profile_inline_ops ops = {
.done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
.initialize_flow_management_of_ndev_profile_inline =
initialize_flow_management_of_ndev_profile_inline,
+ .flow_dev_dump_profile_inline = flow_dev_dump_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e623bb2352..2c76a2c023 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 20b5cb2835..67a24a00f1 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -582,9 +582,38 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: flow_filter module uninitialized", __func__);
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_dev_dump(internals->flw_dev,
+ is_flow_handle_typecast(flow) ? (void *)flow
+ : flow->flw_hdl,
+ caller_id, file, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .dev_dump = eth_flow_dev_dump,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 27d6cbef01..cef655c5e0 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,12 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -284,6 +290,11 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ int (*flow_dev_dump)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
/*
* NT Flow API
*/
--
2.45.0
* [PATCH v4 38/86] net/ntnic: add flow flush
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (36 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 37/86] net/ntnic: add flow dump feature Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 39/86] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
` (48 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Implement the flow flush API.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 13 ++++++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 4 ++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 38 ++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +++
5 files changed, 105 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 7f1e311988..34f2cad2cd 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -253,6 +253,18 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
+static int flow_flush(struct flow_eth_dev *dev, uint16_t caller_id, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_flush_profile_inline(dev, caller_id, error);
+}
+
/*
* Device Management API
*/
@@ -1047,6 +1059,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9727a28d45..af07819a0c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3635,6 +3635,48 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ /*
+ * Delete all created FLM flows from this eth device.
+ * FLM flows must be deleted first because normal flows are their parents.
+ */
+ struct flow_handle *flow = dev->ndev->flow_base_flm;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ /* Delete all created flows from this eth device */
+ flow = dev->ndev->flow_base;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ return err;
+}
+
static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
{
return (hash_mask & hash_bits) == hash_bits;
@@ -4395,6 +4437,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
/*
* NT Flow FLM Meter API
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 2c76a2c023..c695842077 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,10 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 67a24a00f1..93d89d59f3 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -582,6 +582,43 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ int res = 0;
+ /* Main application caller_id is port_id shifted above VDPA ports */
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (internals->flw_dev) {
+ res = flow_filter_ops->flow_flush(internals->flw_dev, caller_id, &flow_error);
+ rte_spinlock_lock(&flow_lock);
+
+ for (int flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used && nt_flows[flow].caller_id == caller_id) {
+ /* Cleanup recorded flows */
+ nt_flows[flow].used = 0;
+ nt_flows[flow].caller_id = 0;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
+ }
+
+ convert_error(error, &flow_error);
+
+ return res;
+}
+
static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
struct rte_flow *flow,
FILE *file,
@@ -613,6 +650,7 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index cef655c5e0..12baa13800 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,10 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_flush_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -309,6 +313,9 @@ struct flow_filter_ops {
int (*flow_destroy)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v4 39/86] net/ntnic: add GMF (Generic MAC Feeder) module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (37 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 38/86] net/ntnic: add flow flush Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 40/86] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
` (47 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The Generic MAC Feeder (GMF) module provides a way to feed data
to the MAC modules directly from the FPGA,
rather than from the host or physical ports.
Its intended use case is as a test tool; it is not used by NTNIC itself.
The module is nevertheless required for correct initialization.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 ++
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +++++++++
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 ++++++++++++++++++
5 files changed, 207 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
diff --git a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
index 8964458b47..d8e0cad7cd 100644
--- a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
+++ b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
@@ -404,6 +404,14 @@ static int _port_init(adapter_info_t *drv, nthw_fpga_t *fpga, int port)
_enable_tx(drv, mac_pcs);
_reset_rx(drv, mac_pcs);
+ /* 2.2) Nt4gaPort::setup() */
+ if (nthw_gmf_init(NULL, fpga, port) == 0) {
+ nthw_gmf_t gmf;
+
+ if (nthw_gmf_init(&gmf, fpga, port) == 0)
+ nthw_gmf_set_enable(&gmf, true);
+ }
+
/* Phase 3. Link state machine steps */
/* 3.1) Create NIM, ::createNim() */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d7e6d05556..92167d24e4 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -38,6 +38,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst9563.c',
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
+ 'nthw/core/nthw_gmf.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_core.h b/drivers/net/ntnic/nthw/core/include/nthw_core.h
index fe32891712..4073f9632c 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_core.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_core.h
@@ -17,6 +17,7 @@
#include "nthw_iic.h"
#include "nthw_i2cm.h"
+#include "nthw_gmf.h"
#include "nthw_gpio_phy.h"
#include "nthw_mac_pcs.h"
#include "nthw_sdc.h"
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_gmf.h b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
new file mode 100644
index 0000000000..cc5be85154
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
@@ -0,0 +1,64 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_GMF_H__
+#define __NTHW_GMF_H__
+
+struct nthw_gmf {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_gmf;
+ int mn_instance;
+
+ nthw_register_t *mp_ctrl;
+ nthw_field_t *mp_ctrl_enable;
+ nthw_field_t *mp_ctrl_ifg_enable;
+ nthw_field_t *mp_ctrl_ifg_tx_now_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock;
+ nthw_field_t *mp_ctrl_ifg_auto_adjust_enable;
+ nthw_field_t *mp_ctrl_ts_inject_always;
+ nthw_field_t *mp_ctrl_fcs_always;
+
+ nthw_register_t *mp_speed;
+ nthw_field_t *mp_speed_ifg_speed;
+
+ nthw_register_t *mp_ifg_clock_delta;
+ nthw_field_t *mp_ifg_clock_delta_delta;
+
+ nthw_register_t *mp_ifg_clock_delta_adjust;
+ nthw_field_t *mp_ifg_clock_delta_adjust_delta;
+
+ nthw_register_t *mp_ifg_max_adjust_slack;
+ nthw_field_t *mp_ifg_max_adjust_slack_slack;
+
+ nthw_register_t *mp_debug_lane_marker;
+ nthw_field_t *mp_debug_lane_marker_compensation;
+
+ nthw_register_t *mp_stat_sticky;
+ nthw_field_t *mp_stat_sticky_data_underflowed;
+ nthw_field_t *mp_stat_sticky_ifg_adjusted;
+
+ nthw_register_t *mp_stat_next_pkt;
+ nthw_field_t *mp_stat_next_pkt_ns;
+
+ nthw_register_t *mp_stat_max_delayed_pkt;
+ nthw_field_t *mp_stat_max_delayed_pkt_ns;
+
+ nthw_register_t *mp_ts_inject;
+ nthw_field_t *mp_ts_inject_offset;
+ nthw_field_t *mp_ts_inject_pos;
+ int mn_param_gmf_ifg_speed_mul;
+ int mn_param_gmf_ifg_speed_div;
+
+ bool m_administrative_block; /* Used to enforce license expiry */
+};
+
+typedef struct nthw_gmf nthw_gmf_t;
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable);
+
+#endif /* __NTHW_GMF_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_gmf.c b/drivers/net/ntnic/nthw/core/nthw_gmf.c
new file mode 100644
index 0000000000..16a4c288bd
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_gmf.c
@@ -0,0 +1,133 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <limits.h>
+#include <math.h>
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_gmf.h"
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_GMF, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: GMF %d: no such instance",
+ p_fpga->p_fpga_info->mp_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_gmf = mod;
+
+ p->mp_ctrl = nthw_module_get_register(p->mp_mod_gmf, GMF_CTRL);
+ p->mp_ctrl_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_ENABLE);
+ p->mp_ctrl_ifg_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_ENABLE);
+ p->mp_ctrl_ifg_auto_adjust_enable =
+ nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_AUTO_ADJUST_ENABLE);
+ p->mp_ctrl_ts_inject_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_TS_INJECT_ALWAYS);
+ p->mp_ctrl_fcs_always = nthw_register_query_field(p->mp_ctrl, GMF_CTRL_FCS_ALWAYS);
+
+ p->mp_speed = nthw_module_get_register(p->mp_mod_gmf, GMF_SPEED);
+ p->mp_speed_ifg_speed = nthw_register_get_field(p->mp_speed, GMF_SPEED_IFG_SPEED);
+
+ p->mp_ifg_clock_delta = nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA);
+ p->mp_ifg_clock_delta_delta =
+ nthw_register_get_field(p->mp_ifg_clock_delta, GMF_IFG_SET_CLOCK_DELTA_DELTA);
+
+ p->mp_ifg_max_adjust_slack =
+ nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_MAX_ADJUST_SLACK);
+ p->mp_ifg_max_adjust_slack_slack = nthw_register_get_field(p->mp_ifg_max_adjust_slack,
+ GMF_IFG_MAX_ADJUST_SLACK_SLACK);
+
+ p->mp_debug_lane_marker = nthw_module_get_register(p->mp_mod_gmf, GMF_DEBUG_LANE_MARKER);
+ p->mp_debug_lane_marker_compensation =
+ nthw_register_get_field(p->mp_debug_lane_marker,
+ GMF_DEBUG_LANE_MARKER_COMPENSATION);
+
+ p->mp_stat_sticky = nthw_module_get_register(p->mp_mod_gmf, GMF_STAT_STICKY);
+ p->mp_stat_sticky_data_underflowed =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_DATA_UNDERFLOWED);
+ p->mp_stat_sticky_ifg_adjusted =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_IFG_ADJUSTED);
+
+ p->mn_param_gmf_ifg_speed_mul =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_MUL, 1);
+ p->mn_param_gmf_ifg_speed_div =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_DIV, 1);
+
+ p->m_administrative_block = false;
+
+ p->mp_stat_next_pkt = nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_NEXT_PKT);
+
+ if (p->mp_stat_next_pkt) {
+ p->mp_stat_next_pkt_ns =
+ nthw_register_query_field(p->mp_stat_next_pkt, GMF_STAT_NEXT_PKT_NS);
+
+ } else {
+ p->mp_stat_next_pkt_ns = NULL;
+ }
+
+ p->mp_stat_max_delayed_pkt =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_MAX_DELAYED_PKT);
+
+ if (p->mp_stat_max_delayed_pkt) {
+ p->mp_stat_max_delayed_pkt_ns =
+ nthw_register_query_field(p->mp_stat_max_delayed_pkt,
+ GMF_STAT_MAX_DELAYED_PKT_NS);
+
+ } else {
+ p->mp_stat_max_delayed_pkt_ns = NULL;
+ }
+
+ p->mp_ctrl_ifg_tx_now_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_NOW_ALWAYS);
+ p->mp_ctrl_ifg_tx_on_ts_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ALWAYS);
+
+ p->mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ADJUST_ON_SET_CLOCK);
+
+ p->mp_ifg_clock_delta_adjust =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA_ADJUST);
+
+ if (p->mp_ifg_clock_delta_adjust) {
+ p->mp_ifg_clock_delta_adjust_delta =
+ nthw_register_query_field(p->mp_ifg_clock_delta_adjust,
+ GMF_IFG_SET_CLOCK_DELTA_ADJUST_DELTA);
+
+ } else {
+ p->mp_ifg_clock_delta_adjust_delta = NULL;
+ }
+
+ p->mp_ts_inject = nthw_module_query_register(p->mp_mod_gmf, GMF_TS_INJECT);
+
+ if (p->mp_ts_inject) {
+ p->mp_ts_inject_offset =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_OFFSET);
+ p->mp_ts_inject_pos =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_POS);
+
+ } else {
+ p->mp_ts_inject_offset = NULL;
+ p->mp_ts_inject_pos = NULL;
+ }
+
+ return 0;
+}
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable)
+{
+ if (!p->m_administrative_block)
+ nthw_field_set_val_flush32(p->mp_ctrl_enable, enable ? 1 : 0);
+}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 40/86] net/ntnic: sort FPGA registers alphanumerically
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (38 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 39/86] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 41/86] net/ntnic: add CSU module registers Serhii Iliushyk
` (46 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Beautification commit: sorting the registers alphanumerically is required to cleanly support different FPGA variants.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 364 +++++++++---------
1 file changed, 182 insertions(+), 182 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 6df7208649..e076697a92 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,187 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
+ { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
+ { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
+ { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
+ { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
+ { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
+ { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
+ { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
+ { DBS_RX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
+ { DBS_RX_INIT_BUSY, 1, 8, 0 },
+ { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
+ { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
+ { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
+ { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
+ { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
+ { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
+ { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
+ { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
+ { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
+ { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
+ { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
+ { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
+ { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
+ { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
+ { DBS_TX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
+ { DBS_TX_INIT_BUSY, 1, 8, 0 },
+ { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
+ { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
+ { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
+ { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
+ { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
+ { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
+ { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
+ { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
+ { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
+ { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
+ { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
+ { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
+ { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
+ { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
+ { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_register_init_s dbs_registers[] = {
+ { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
+ { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
+ { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
+ { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
+ { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
+ { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
+ { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
+ { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
+ { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
+ { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
+ { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
+ { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
+ { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
+ { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
+ { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
+ { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
+ { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
+ { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
+ { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
+ { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
+ { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
+ { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
+ { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
+ { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
+ { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
+ { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
+ { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1541,192 +1722,11 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
-static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
- { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
- { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
- { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
- { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
- { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
- { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
- { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
- { DBS_RX_IDLE_BUSY, 1, 8, 0 },
- { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
- { DBS_RX_INIT_BUSY, 1, 8, 0 },
- { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
- { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
- { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
- { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
- { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
- { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
- { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
- { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
- { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
- { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
- { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
- { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
- { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
- { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
- { DBS_TX_IDLE_BUSY, 1, 8, 0 },
- { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
- { DBS_TX_INIT_BUSY, 1, 8, 0 },
- { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
- { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
- { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
- { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
- { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
- { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
- { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
- { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
- { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
- { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
- { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
- { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
- { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
- { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
- { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_register_init_s dbs_registers[] = {
- { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
- { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
- { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
- { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
- { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
- { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
- { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
- { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
- { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
- { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
- { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
- { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
- { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
- { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
- { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
- { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
- { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
- { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
- { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
- { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
- { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
- { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
- { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
- { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
- { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
- { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
- { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
-};
-
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
- { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers},
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
{
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
--
2.45.0
* [PATCH v4 41/86] net/ntnic: add CSU module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (39 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 40/86] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 42/86] net/ntnic: add FLM " Serhii Iliushyk
` (45 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Checksum Update module updates the checksums of packets
that have been modified in any way.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index e076697a92..efa7b306bc 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,23 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
+ { CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s csu_rcp_data_fields[] = {
+ { CSU_RCP_DATA_IL3_CMD, 2, 5, 0x0000 },
+ { CSU_RCP_DATA_IL4_CMD, 3, 7, 0x0000 },
+ { CSU_RCP_DATA_OL3_CMD, 2, 0, 0x0000 },
+ { CSU_RCP_DATA_OL4_CMD, 3, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s csu_registers[] = {
+ { CSU_RCP_CTRL, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, csu_rcp_ctrl_fields },
+ { CSU_RCP_DATA, 2, 10, NTHW_FPGA_REG_TYPE_WO, 0, 4, csu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
{ DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
{ DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
@@ -1724,6 +1741,7 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
@@ -1919,5 +1937,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 22, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
};
--
2.45.0
* [PATCH v4 42/86] net/ntnic: add FLM module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (40 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 41/86] net/ntnic: add CSU module registers Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 43/86] net/ntnic: add HFU " Serhii Iliushyk
` (44 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup and
programming engine which supports exact match lookup at line rate
for up to hundreds of millions of flows.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 286 +++++++++++++++++-
1 file changed, 284 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efa7b306bc..739cabfb1c 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -468,6 +468,288 @@ static nthw_fpga_register_init_s dbs_registers[] = {
{ DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
};
+static nthw_fpga_field_init_s flm_buf_ctrl_fields[] = {
+ { FLM_BUF_CTRL_INF_AVAIL, 16, 16, 0x0000 },
+ { FLM_BUF_CTRL_LRN_FREE, 16, 0, 0x0000 },
+ { FLM_BUF_CTRL_STA_AVAIL, 16, 32, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_control_fields[] = {
+ { FLM_CONTROL_CALIB_RECALIBRATE, 3, 28, 0 },
+ { FLM_CONTROL_CRCRD, 1, 12, 0x0000 },
+ { FLM_CONTROL_CRCWR, 1, 11, 0x0000 },
+ { FLM_CONTROL_EAB, 5, 18, 0 },
+ { FLM_CONTROL_ENABLE, 1, 0, 0 },
+ { FLM_CONTROL_INIT, 1, 1, 0x0000 },
+ { FLM_CONTROL_LDS, 1, 2, 0x0000 },
+ { FLM_CONTROL_LFS, 1, 3, 0x0000 },
+ { FLM_CONTROL_LIS, 1, 4, 0x0000 },
+ { FLM_CONTROL_PDS, 1, 9, 0x0000 },
+ { FLM_CONTROL_PIS, 1, 10, 0x0000 },
+ { FLM_CONTROL_RBL, 4, 13, 0 },
+ { FLM_CONTROL_RDS, 1, 7, 0x0000 },
+ { FLM_CONTROL_RIS, 1, 8, 0x0000 },
+ { FLM_CONTROL_SPLIT_SDRAM_USAGE, 5, 23, 16 },
+ { FLM_CONTROL_UDS, 1, 5, 0x0000 },
+ { FLM_CONTROL_UIS, 1, 6, 0x0000 },
+ { FLM_CONTROL_WPD, 1, 17, 0 },
+};
+
+static nthw_fpga_field_init_s flm_inf_data_fields[] = {
+ { FLM_INF_DATA_BYTES, 64, 0, 0x0000 }, { FLM_INF_DATA_CAUSE, 3, 224, 0x0000 },
+ { FLM_INF_DATA_EOR, 1, 287, 0x0000 }, { FLM_INF_DATA_ID, 32, 192, 0x0000 },
+ { FLM_INF_DATA_PACKETS, 64, 64, 0x0000 }, { FLM_INF_DATA_TS, 64, 128, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_aps_fields[] = {
+ { FLM_LOAD_APS_APS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_bin_fields[] = {
+ { FLM_LOAD_BIN_BIN, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_lps_fields[] = {
+ { FLM_LOAD_LPS_LPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
+ { FLM_LRN_DATA_ADJ, 32, 480, 0x0000 }, { FLM_LRN_DATA_COLOR, 32, 448, 0x0000 },
+ { FLM_LRN_DATA_DSCP, 6, 698, 0x0000 }, { FLM_LRN_DATA_ENT, 1, 693, 0x0000 },
+ { FLM_LRN_DATA_EOR, 1, 767, 0x0000 }, { FLM_LRN_DATA_FILL, 16, 544, 0x0000 },
+ { FLM_LRN_DATA_FT, 4, 560, 0x0000 }, { FLM_LRN_DATA_FT_MBR, 4, 564, 0x0000 },
+ { FLM_LRN_DATA_FT_MISS, 4, 568, 0x0000 }, { FLM_LRN_DATA_ID, 32, 512, 0x0000 },
+ { FLM_LRN_DATA_KID, 8, 328, 0x0000 }, { FLM_LRN_DATA_MBR_ID1, 28, 572, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID2, 28, 600, 0x0000 }, { FLM_LRN_DATA_MBR_ID3, 28, 628, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID4, 28, 656, 0x0000 }, { FLM_LRN_DATA_NAT_EN, 1, 711, 0x0000 },
+ { FLM_LRN_DATA_NAT_IP, 32, 336, 0x0000 }, { FLM_LRN_DATA_NAT_PORT, 16, 400, 0x0000 },
+ { FLM_LRN_DATA_NOFI, 1, 716, 0x0000 }, { FLM_LRN_DATA_OP, 4, 694, 0x0000 },
+ { FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
+ { FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
+ { FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
+ { FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
+ { FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_prio_fields[] = {
+ { FLM_PRIO_FT0, 4, 4, 1 }, { FLM_PRIO_FT1, 4, 12, 1 }, { FLM_PRIO_FT2, 4, 20, 1 },
+ { FLM_PRIO_FT3, 4, 28, 1 }, { FLM_PRIO_LIMIT0, 4, 0, 0 }, { FLM_PRIO_LIMIT1, 4, 8, 0 },
+ { FLM_PRIO_LIMIT2, 4, 16, 0 }, { FLM_PRIO_LIMIT3, 4, 24, 0 },
+};
+
+static nthw_fpga_field_init_s flm_pst_ctrl_fields[] = {
+ { FLM_PST_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_PST_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_pst_data_fields[] = {
+ { FLM_PST_DATA_BP, 5, 0, 0x0000 },
+ { FLM_PST_DATA_PP, 5, 5, 0x0000 },
+ { FLM_PST_DATA_TP, 5, 10, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_ctrl_fields[] = {
+ { FLM_RCP_CTRL_ADR, 5, 0, 0x0000 },
+ { FLM_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_data_fields[] = {
+ { FLM_RCP_DATA_AUTO_IPV4_MASK, 1, 402, 0x0000 },
+ { FLM_RCP_DATA_BYT_DYN, 5, 387, 0x0000 },
+ { FLM_RCP_DATA_BYT_OFS, 8, 392, 0x0000 },
+ { FLM_RCP_DATA_IPN, 1, 386, 0x0000 },
+ { FLM_RCP_DATA_KID, 8, 377, 0x0000 },
+ { FLM_RCP_DATA_LOOKUP, 1, 0, 0x0000 },
+ { FLM_RCP_DATA_MASK, 320, 57, 0x0000 },
+ { FLM_RCP_DATA_OPN, 1, 385, 0x0000 },
+ { FLM_RCP_DATA_QW0_DYN, 5, 1, 0x0000 },
+ { FLM_RCP_DATA_QW0_OFS, 8, 6, 0x0000 },
+ { FLM_RCP_DATA_QW0_SEL, 2, 14, 0x0000 },
+ { FLM_RCP_DATA_QW4_DYN, 5, 16, 0x0000 },
+ { FLM_RCP_DATA_QW4_OFS, 8, 21, 0x0000 },
+ { FLM_RCP_DATA_SW8_DYN, 5, 29, 0x0000 },
+ { FLM_RCP_DATA_SW8_OFS, 8, 34, 0x0000 },
+ { FLM_RCP_DATA_SW8_SEL, 2, 42, 0x0000 },
+ { FLM_RCP_DATA_SW9_DYN, 5, 44, 0x0000 },
+ { FLM_RCP_DATA_SW9_OFS, 8, 49, 0x0000 },
+ { FLM_RCP_DATA_TXPLM, 2, 400, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scan_fields[] = {
+ { FLM_SCAN_I, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s flm_status_fields[] = {
+ { FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
+ { FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
+ { FLM_STATUS_CALIB_SUCCESS, 3, 0, 0 },
+ { FLM_STATUS_CRCERR, 1, 10, 0x0000 },
+ { FLM_STATUS_CRITICAL, 1, 8, 0x0000 },
+ { FLM_STATUS_EFT_BP, 1, 11, 0x0000 },
+ { FLM_STATUS_IDLE, 1, 7, 0x0000 },
+ { FLM_STATUS_INITDONE, 1, 6, 0x0000 },
+ { FLM_STATUS_PANIC, 1, 9, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_done_fields[] = {
+ { FLM_STAT_AUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_fail_fields[] = {
+ { FLM_STAT_AUL_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_ignore_fields[] = {
+ { FLM_STAT_AUL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_hit_fields[] = {
+ { FLM_STAT_CSH_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_miss_fields[] = {
+ { FLM_STAT_CSH_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_unh_fields[] = {
+ { FLM_STAT_CSH_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_move_fields[] = {
+ { FLM_STAT_CUC_MOVE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_start_fields[] = {
+ { FLM_STAT_CUC_START_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_flows_fields[] = {
+ { FLM_STAT_FLOWS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_done_fields[] = {
+ { FLM_STAT_INF_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_skip_fields[] = {
+ { FLM_STAT_INF_SKIP_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_done_fields[] = {
+ { FLM_STAT_LRN_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_fail_fields[] = {
+ { FLM_STAT_LRN_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_ignore_fields[] = {
+ { FLM_STAT_LRN_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_dis_fields[] = {
+ { FLM_STAT_PCK_DIS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_hit_fields[] = {
+ { FLM_STAT_PCK_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_miss_fields[] = {
+ { FLM_STAT_PCK_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_unh_fields[] = {
+ { FLM_STAT_PCK_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_done_fields[] = {
+ { FLM_STAT_PRB_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_ignore_fields[] = {
+ { FLM_STAT_PRB_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_done_fields[] = {
+ { FLM_STAT_REL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_ignore_fields[] = {
+ { FLM_STAT_REL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_sta_done_fields[] = {
+ { FLM_STAT_STA_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_tul_done_fields[] = {
+ { FLM_STAT_TUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_done_fields[] = {
+ { FLM_STAT_UNL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_ignore_fields[] = {
+ { FLM_STAT_UNL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_sta_data_fields[] = {
+ { FLM_STA_DATA_EOR, 1, 95, 0x0000 }, { FLM_STA_DATA_ID, 32, 0, 0x0000 },
+ { FLM_STA_DATA_LDS, 1, 32, 0x0000 }, { FLM_STA_DATA_LFS, 1, 33, 0x0000 },
+ { FLM_STA_DATA_LIS, 1, 34, 0x0000 }, { FLM_STA_DATA_PDS, 1, 39, 0x0000 },
+ { FLM_STA_DATA_PIS, 1, 40, 0x0000 }, { FLM_STA_DATA_RDS, 1, 37, 0x0000 },
+ { FLM_STA_DATA_RIS, 1, 38, 0x0000 }, { FLM_STA_DATA_UDS, 1, 35, 0x0000 },
+ { FLM_STA_DATA_UIS, 1, 36, 0x0000 },
+};
+
+static nthw_fpga_register_init_s flm_registers[] = {
+ { FLM_BUF_CTRL, 14, 48, NTHW_FPGA_REG_TYPE_RW, 0, 3, flm_buf_ctrl_fields },
+ { FLM_CONTROL, 0, 31, NTHW_FPGA_REG_TYPE_MIXED, 134217728, 18, flm_control_fields },
+ { FLM_INF_DATA, 16, 288, NTHW_FPGA_REG_TYPE_RO, 0, 6, flm_inf_data_fields },
+ { FLM_LOAD_APS, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_aps_fields },
+ { FLM_LOAD_BIN, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_load_bin_fields },
+ { FLM_LOAD_LPS, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_lps_fields },
+ { FLM_LRN_DATA, 15, 768, NTHW_FPGA_REG_TYPE_WO, 0, 34, flm_lrn_data_fields },
+ { FLM_PRIO, 6, 32, NTHW_FPGA_REG_TYPE_WO, 269488144, 8, flm_prio_fields },
+ { FLM_PST_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_pst_ctrl_fields },
+ { FLM_PST_DATA, 13, 15, NTHW_FPGA_REG_TYPE_WO, 0, 3, flm_pst_data_fields },
+ { FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
+ { FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
+ { FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
+ { FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
+ { FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
+ { FLM_STAT_AUL_IGNORE, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_ignore_fields },
+ { FLM_STAT_CSH_HIT, 52, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_hit_fields },
+ { FLM_STAT_CSH_MISS, 53, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_miss_fields },
+ { FLM_STAT_CSH_UNH, 54, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_unh_fields },
+ { FLM_STAT_CUC_MOVE, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_move_fields },
+ { FLM_STAT_CUC_START, 55, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_start_fields },
+ { FLM_STAT_FLOWS, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_flows_fields },
+ { FLM_STAT_INF_DONE, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_done_fields },
+ { FLM_STAT_INF_SKIP, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_skip_fields },
+ { FLM_STAT_LRN_DONE, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_done_fields },
+ { FLM_STAT_LRN_FAIL, 34, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_fail_fields },
+ { FLM_STAT_LRN_IGNORE, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_ignore_fields },
+ { FLM_STAT_PCK_DIS, 51, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_dis_fields },
+ { FLM_STAT_PCK_HIT, 48, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_hit_fields },
+ { FLM_STAT_PCK_MISS, 49, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_miss_fields },
+ { FLM_STAT_PCK_UNH, 50, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_unh_fields },
+ { FLM_STAT_PRB_DONE, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_done_fields },
+ { FLM_STAT_PRB_IGNORE, 40, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_ignore_fields },
+ { FLM_STAT_REL_DONE, 37, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_done_fields },
+ { FLM_STAT_REL_IGNORE, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_ignore_fields },
+ { FLM_STAT_STA_DONE, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_sta_done_fields },
+ { FLM_STAT_TUL_DONE, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_tul_done_fields },
+ { FLM_STAT_UNL_DONE, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_done_fields },
+ { FLM_STAT_UNL_IGNORE, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_ignore_fields },
+ { FLM_STA_DATA, 17, 96, NTHW_FPGA_REG_TYPE_RO, 0, 11, flm_sta_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1743,6 +2025,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
+ { MOD_FLM, 0, MOD_FLM, 0, 25, NTHW_FPGA_BUS_TYPE_RAB1, 1280, 43, flm_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
@@ -1817,7 +2100,6 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
- { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
@@ -1937,5 +2219,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 43/86] net/ntnic: add HFU module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (41 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 42/86] net/ntnic: add FLM " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 44/86] net/ntnic: add IFR " Serhii Iliushyk
` (43 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Header Field Update module updates protocol header fields
when a packet has been modified,
for example length fields and next-protocol fields.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 739cabfb1c..82068746b3 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -919,6 +919,41 @@ static nthw_fpga_register_init_s gpio_phy_registers[] = {
{ GPIO_PHY_GPIO, 1, 10, NTHW_FPGA_REG_TYPE_RW, 17, 10, gpio_phy_gpio_fields },
};
+static nthw_fpga_field_init_s hfu_rcp_ctrl_fields[] = {
+ { HFU_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { HFU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s hfu_rcp_data_fields[] = {
+ { HFU_RCP_DATA_LEN_A_ADD_DYN, 5, 15, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_ADD_OFS, 8, 20, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_OL4LEN, 1, 1, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_DYN, 5, 2, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_OFS, 8, 7, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_SUB_DYN, 5, 28, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_WR, 1, 0, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_DYN, 5, 47, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_OFS, 8, 52, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_DYN, 5, 34, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_OFS, 8, 39, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_SUB_DYN, 5, 60, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_WR, 1, 33, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_DYN, 5, 79, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_OFS, 8, 84, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_DYN, 5, 66, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_OFS, 8, 71, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_SUB_DYN, 5, 92, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_WR, 1, 65, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_DYN, 5, 98, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_OFS, 8, 103, 0x0000 },
+ { HFU_RCP_DATA_TTL_WR, 1, 97, 0x0000 },
+};
+
+static nthw_fpga_register_init_s hfu_registers[] = {
+ { HFU_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, hfu_rcp_ctrl_fields },
+ { HFU_RCP_DATA, 1, 111, NTHW_FPGA_REG_TYPE_WO, 0, 22, hfu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s hif_build_time_fields[] = {
{ HIF_BUILD_TIME_TIME, 32, 0, 1726740521 },
};
@@ -2033,6 +2068,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
gpio_phy_registers
},
+ { MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
@@ -2219,5 +2255,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 44/86] net/ntnic: add IFR module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (42 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 43/86] net/ntnic: add HFU " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 45/86] net/ntnic: add MAC Rx " Serhii Iliushyk
` (42 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The IP Fragmenter module can fragment outgoing packets
based on a programmable MTU.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 82068746b3..509e1f6860 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1095,6 +1095,44 @@ static nthw_fpga_register_init_s hsh_registers[] = {
{ HSH_RCP_DATA, 1, 743, NTHW_FPGA_REG_TYPE_WO, 0, 23, hsh_rcp_data_fields },
};
+static nthw_fpga_field_init_s ifr_counters_ctrl_fields[] = {
+ { IFR_COUNTERS_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_COUNTERS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_counters_data_fields[] = {
+ { IFR_COUNTERS_DATA_DROP, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_ctrl_fields[] = {
+ { IFR_DF_BUF_CTRL_AVAILABLE, 11, 0, 0x0000 },
+ { IFR_DF_BUF_CTRL_MTU_PROFILE, 16, 11, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_data_fields[] = {
+ { IFR_DF_BUF_DATA_FIFO_DAT, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_ctrl_fields[] = {
+ { IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_data_fields[] = {
+ { IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 }, { IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 }, { IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ifr_registers[] = {
+ { IFR_COUNTERS_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_counters_ctrl_fields },
+ { IFR_COUNTERS_DATA, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_counters_data_fields },
+ { IFR_DF_BUF_CTRL, 2, 27, NTHW_FPGA_REG_TYPE_RO, 0, 2, ifr_df_buf_ctrl_fields },
+ { IFR_DF_BUF_DATA, 3, 128, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_df_buf_data_fields },
+ { IFR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_rcp_ctrl_fields },
+ { IFR_RCP_DATA, 1, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, ifr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s iic_adr_fields[] = {
{ IIC_ADR_SLV_ADR, 7, 1, 0 },
};
@@ -2071,6 +2109,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
+ { MOD_IFR, 0, MOD_IFR, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 9984, 6, ifr_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
{ MOD_IIC, 1, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 896, 22, iic_registers },
{ MOD_IIC, 2, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 24832, 22, iic_registers },
@@ -2255,5 +2294,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 45/86] net/ntnic: add MAC Rx module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (43 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 44/86] net/ntnic: add IFR " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 46/86] net/ntnic: add MAC Tx " Serhii Iliushyk
` (41 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Receive module contains counters
that keep track of received packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 61 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +++++++++
4 files changed, 92 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 509e1f6860..eecd6342c0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1774,6 +1774,63 @@ static nthw_fpga_register_init_s mac_pcs_registers[] = {
},
};
+static nthw_fpga_field_init_s mac_rx_bad_fcs_fields[] = {
+ { MAC_RX_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_fragment_fields[] = {
+ { MAC_RX_FRAGMENT_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_bad_fcs_fields[] = {
+ { MAC_RX_PACKET_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_small_fields[] = {
+ { MAC_RX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_bytes_fields[] = {
+ { MAC_RX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_bytes_fields[] = {
+ { MAC_RX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_packets_fields[] = {
+ { MAC_RX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_packets_fields[] = {
+ { MAC_RX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_undersize_fields[] = {
+ { MAC_RX_UNDERSIZE_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_rx_registers[] = {
+ { MAC_RX_BAD_FCS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_bad_fcs_fields },
+ { MAC_RX_FRAGMENT, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_fragment_fields },
+ {
+ MAC_RX_PACKET_BAD_FCS, 7, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_packet_bad_fcs_fields
+ },
+ { MAC_RX_PACKET_SMALL, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_packet_small_fields },
+ { MAC_RX_TOTAL_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_bytes_fields },
+ {
+ MAC_RX_TOTAL_GOOD_BYTES, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_bytes_fields
+ },
+ {
+ MAC_RX_TOTAL_GOOD_PACKETS, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_packets_fields
+ },
+ { MAC_RX_TOTAL_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_packets_fields },
+ { MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2123,6 +2180,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_MAC_PCS, 1, MOD_MAC_PCS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB2, 11776, 44,
mac_pcs_registers
},
+ { MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
+ { MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2294,5 +2353,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index b6be02f45e..5983ba7095 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -29,6 +29,7 @@
#define MOD_IIC (0x7629cddbUL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
+#define MOD_MAC_RX (0x6347b490UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -43,7 +44,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (14)
+#define MOD_IDX_COUNT (31)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 3560eeda7d..5ebbec6c7e 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -30,6 +30,7 @@
#include "nthw_fpga_reg_defs_ins.h"
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
+#include "nthw_fpga_reg_defs_mac_rx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
new file mode 100644
index 0000000000..3829c10f3b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
@@ -0,0 +1,29 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_RX_
+#define _NTHW_FPGA_REG_DEFS_MAC_RX_
+
+/* MAC_RX */
+#define MAC_RX_BAD_FCS (0xca07f618UL)
+#define MAC_RX_BAD_FCS_COUNT (0x11d5ba0eUL)
+#define MAC_RX_FRAGMENT (0x5363b736UL)
+#define MAC_RX_FRAGMENT_COUNT (0xf664c9aUL)
+#define MAC_RX_PACKET_BAD_FCS (0x4cb8b34cUL)
+#define MAC_RX_PACKET_BAD_FCS_COUNT (0xb6701e28UL)
+#define MAC_RX_PACKET_SMALL (0xed318a65UL)
+#define MAC_RX_PACKET_SMALL_COUNT (0x72095ec7UL)
+#define MAC_RX_TOTAL_BYTES (0x831313e2UL)
+#define MAC_RX_TOTAL_BYTES_COUNT (0xe5d8be59UL)
+#define MAC_RX_TOTAL_GOOD_BYTES (0x912c2d1cUL)
+#define MAC_RX_TOTAL_GOOD_BYTES_COUNT (0x63bb5f3eUL)
+#define MAC_RX_TOTAL_GOOD_PACKETS (0xfbb4f497UL)
+#define MAC_RX_TOTAL_GOOD_PACKETS_COUNT (0xae9d21b0UL)
+#define MAC_RX_TOTAL_PACKETS (0xb0ea3730UL)
+#define MAC_RX_TOTAL_PACKETS_COUNT (0x532c885dUL)
+#define MAC_RX_UNDERSIZE (0xb6fa4bdbUL)
+#define MAC_RX_UNDERSIZE_COUNT (0x471945ffUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_RX_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 46/86] net/ntnic: add MAC Tx module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (44 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 45/86] net/ntnic: add MAC Rx " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 47/86] net/ntnic: add RPP LR " Serhii Iliushyk
` (40 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control Transmit module contains counters
that keep track of transmitted packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 ++++++++++
4 files changed, 61 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index eecd6342c0..7a2f5aec32 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1831,6 +1831,40 @@ static nthw_fpga_register_init_s mac_rx_registers[] = {
{ MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
};
+static nthw_fpga_field_init_s mac_tx_packet_small_fields[] = {
+ { MAC_TX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_bytes_fields[] = {
+ { MAC_TX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_bytes_fields[] = {
+ { MAC_TX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_packets_fields[] = {
+ { MAC_TX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_packets_fields[] = {
+ { MAC_TX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_tx_registers[] = {
+ { MAC_TX_PACKET_SMALL, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_packet_small_fields },
+ { MAC_TX_TOTAL_BYTES, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_bytes_fields },
+ {
+ MAC_TX_TOTAL_GOOD_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_bytes_fields
+ },
+ {
+ MAC_TX_TOTAL_GOOD_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_packets_fields
+ },
+ { MAC_TX_TOTAL_PACKETS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_packets_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2182,6 +2216,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
},
{ MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
{ MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
+ { MOD_MAC_TX, 0, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 11264, 5, mac_tx_registers },
+ { MOD_MAC_TX, 1, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12800, 5, mac_tx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2353,5 +2389,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 5983ba7095..f4a913f3d2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -30,6 +30,7 @@
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
+#define MOD_MAC_TX (0x351d1316UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -44,7 +45,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (31)
+#define MOD_IDX_COUNT (32)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 5ebbec6c7e..7741aa563f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -31,6 +31,7 @@
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
#include "nthw_fpga_reg_defs_mac_rx.h"
+#include "nthw_fpga_reg_defs_mac_tx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
new file mode 100644
index 0000000000..6a77d449ae
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_TX_
+#define _NTHW_FPGA_REG_DEFS_MAC_TX_
+
+/* MAC_TX */
+#define MAC_TX_PACKET_SMALL (0xcfcb5e97UL)
+#define MAC_TX_PACKET_SMALL_COUNT (0x84345b01UL)
+#define MAC_TX_TOTAL_BYTES (0x7bd15854UL)
+#define MAC_TX_TOTAL_BYTES_COUNT (0x61fb238cUL)
+#define MAC_TX_TOTAL_GOOD_BYTES (0xcf0260fUL)
+#define MAC_TX_TOTAL_GOOD_BYTES_COUNT (0x8603398UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS (0xd89f151UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS_COUNT (0x12c47c77UL)
+#define MAC_TX_TOTAL_PACKETS (0xe37b5ed4UL)
+#define MAC_TX_TOTAL_PACKETS_COUNT (0x21ddd2ddUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_TX_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 47/86] net/ntnic: add RPP LR module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (45 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 46/86] net/ntnic: add MAC Tx " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 48/86] net/ntnic: add SLC " Serhii Iliushyk
` (39 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The RX Packet Process for Local Retransmit module can add bytes
in the FPGA TX pipeline, which is needed when a packet increases in size.
Note that this only makes room for packet expansion;
the actual expansion is done by other modules.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 32 ++++++++++++++++++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 7a2f5aec32..33437da204 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2138,6 +2138,35 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
+ { RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_data_fields[] = {
+ { RPP_LR_IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_ctrl_fields[] = {
+ { RPP_LR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_data_fields[] = {
+ { RPP_LR_RCP_DATA_EXP, 14, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpp_lr_registers[] = {
+ { RPP_LR_IFR_RCP_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_ifr_rcp_ctrl_fields },
+ { RPP_LR_IFR_RCP_DATA, 3, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, rpp_lr_ifr_rcp_data_fields },
+ { RPP_LR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_rcp_ctrl_fields },
+ { RPP_LR_RCP_DATA, 1, 14, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpp_lr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s rst9563_ctrl_fields[] = {
{ RST9563_CTRL_PTP_MMCM_CLKSEL, 1, 2, 1 },
{ RST9563_CTRL_TS_CLKSEL, 1, 1, 1 },
@@ -2230,6 +2259,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_QSL, 0, MOD_QSL, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 1792, 8, qsl_registers },
{ MOD_RAC, 0, MOD_RAC, 3, 0, NTHW_FPGA_BUS_TYPE_PCI, 8192, 14, rac_registers },
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
+ { MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
};
@@ -2389,5 +2419,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 48/86] net/ntnic: add SLC LR module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (46 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 47/86] net/ntnic: add RPP LR " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 49/86] net/ntnic: add Tx CPY " Serhii Iliushyk
` (38 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new head to the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 33437da204..0f69f89527 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2214,6 +2214,23 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
+static nthw_fpga_field_init_s slc_rcp_ctrl_fields[] = {
+ { SLC_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { SLC_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s slc_rcp_data_fields[] = {
+ { SLC_RCP_DATA_HEAD_DYN, 5, 1, 0x0000 }, { SLC_RCP_DATA_HEAD_OFS, 8, 6, 0x0000 },
+ { SLC_RCP_DATA_HEAD_SLC_EN, 1, 0, 0x0000 }, { SLC_RCP_DATA_PCAP, 1, 35, 0x0000 },
+ { SLC_RCP_DATA_TAIL_DYN, 5, 15, 0x0000 }, { SLC_RCP_DATA_TAIL_OFS, 15, 20, 0x0000 },
+ { SLC_RCP_DATA_TAIL_SLC_EN, 1, 14, 0x0000 },
+};
+
+static nthw_fpga_register_init_s slc_registers[] = {
+ { SLC_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, slc_rcp_ctrl_fields },
+ { SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2261,6 +2278,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
+ { MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2419,5 +2437,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index f4a913f3d2..865dd6a084 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,11 +41,12 @@
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
+#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (32)
+#define MOD_IDX_COUNT (33)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 49/86] net/ntnic: add Tx CPY module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (47 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 48/86] net/ntnic: add SLC " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 50/86] net/ntnic: add Tx INS " Serhii Iliushyk
` (37 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Copy module writes data to packet fields based on lookups
performed by the FLM module.
It is used for NAT and can support other actions based
on the RTE MODIFY_FIELD action.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 204 +++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 205 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 0f69f89527..60fd748ea2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,207 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s cpy_packet_reader0_ctrl_fields[] = {
+ { CPY_PACKET_READER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_PACKET_READER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_packet_reader0_data_fields[] = {
+ { CPY_PACKET_READER0_DATA_DYN, 5, 10, 0x0000 },
+ { CPY_PACKET_READER0_DATA_OFS, 10, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_ctrl_fields[] = {
+ { CPY_WRITER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_data_fields[] = {
+ { CPY_WRITER0_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER0_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER0_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER0_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER0_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_ctrl_fields[] = {
+ { CPY_WRITER0_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_data_fields[] = {
+ { CPY_WRITER0_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_ctrl_fields[] = {
+ { CPY_WRITER1_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_data_fields[] = {
+ { CPY_WRITER1_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER1_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER1_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER1_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER1_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_ctrl_fields[] = {
+ { CPY_WRITER1_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_data_fields[] = {
+ { CPY_WRITER1_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_ctrl_fields[] = {
+ { CPY_WRITER2_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_data_fields[] = {
+ { CPY_WRITER2_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER2_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER2_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER2_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER2_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_ctrl_fields[] = {
+ { CPY_WRITER2_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_data_fields[] = {
+ { CPY_WRITER2_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_ctrl_fields[] = {
+ { CPY_WRITER3_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_data_fields[] = {
+ { CPY_WRITER3_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER3_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER3_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER3_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER3_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_ctrl_fields[] = {
+ { CPY_WRITER3_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_data_fields[] = {
+ { CPY_WRITER3_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_ctrl_fields[] = {
+ { CPY_WRITER4_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_data_fields[] = {
+ { CPY_WRITER4_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER4_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER4_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER4_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER4_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_ctrl_fields[] = {
+ { CPY_WRITER4_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_data_fields[] = {
+ { CPY_WRITER4_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_ctrl_fields[] = {
+ { CPY_WRITER5_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_data_fields[] = {
+ { CPY_WRITER5_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER5_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER5_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER5_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER5_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_ctrl_fields[] = {
+ { CPY_WRITER5_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_data_fields[] = {
+ { CPY_WRITER5_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s cpy_registers[] = {
+ {
+ CPY_PACKET_READER0_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_ctrl_fields
+ },
+ {
+ CPY_PACKET_READER0_DATA, 25, 15, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_data_fields
+ },
+ { CPY_WRITER0_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer0_ctrl_fields },
+ { CPY_WRITER0_DATA, 1, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer0_data_fields },
+ {
+ CPY_WRITER0_MASK_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer0_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER0_MASK_DATA, 3, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer0_mask_data_fields
+ },
+ { CPY_WRITER1_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer1_ctrl_fields },
+ { CPY_WRITER1_DATA, 5, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer1_data_fields },
+ {
+ CPY_WRITER1_MASK_CTRL, 6, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer1_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER1_MASK_DATA, 7, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer1_mask_data_fields
+ },
+ { CPY_WRITER2_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer2_ctrl_fields },
+ { CPY_WRITER2_DATA, 9, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer2_data_fields },
+ {
+ CPY_WRITER2_MASK_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer2_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER2_MASK_DATA, 11, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer2_mask_data_fields
+ },
+ { CPY_WRITER3_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer3_ctrl_fields },
+ { CPY_WRITER3_DATA, 13, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer3_data_fields },
+ {
+ CPY_WRITER3_MASK_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer3_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER3_MASK_DATA, 15, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer3_mask_data_fields
+ },
+ { CPY_WRITER4_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer4_ctrl_fields },
+ { CPY_WRITER4_DATA, 17, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer4_data_fields },
+ {
+ CPY_WRITER4_MASK_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer4_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER4_MASK_DATA, 19, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer4_mask_data_fields
+ },
+ { CPY_WRITER5_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer5_ctrl_fields },
+ { CPY_WRITER5_DATA, 21, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer5_data_fields },
+ {
+ CPY_WRITER5_MASK_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer5_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER5_MASK_DATA, 23, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer5_mask_data_fields
+ },
+};
+
static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
{ CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2279,6 +2480,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
+ { MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2437,5 +2639,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 865dd6a084..0ab5ae0310 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -15,6 +15,7 @@
#define MOD_UNKNOWN (0L)/* Unknown/uninitialized - keep this as the first element */
#define MOD_CAT (0x30b447c2UL)
+#define MOD_CPY (0x1ddc186fUL)
#define MOD_CSU (0x3f470787UL)
#define MOD_DBS (0x80b29727UL)
#define MOD_FLM (0xe7ba53a4UL)
@@ -46,7 +47,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (33)
+#define MOD_IDX_COUNT (34)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v4 50/86] net/ntnic: add Tx INS module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (48 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 49/86] net/ntnic: add Tx CPY " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 51/86] net/ntnic: add Tx RPL " Serhii Iliushyk
` (36 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Inserter module injects zeros at a given offset within a packet,
effectively expanding the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 19 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 60fd748ea2..c8841b1dc2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1457,6 +1457,22 @@ static nthw_fpga_register_init_s iic_registers[] = {
{ IIC_TX_FIFO_OCY, 69, 4, NTHW_FPGA_REG_TYPE_RO, 0, 1, iic_tx_fifo_ocy_fields },
};
+static nthw_fpga_field_init_s ins_rcp_ctrl_fields[] = {
+ { INS_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { INS_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ins_rcp_data_fields[] = {
+ { INS_RCP_DATA_DYN, 5, 0, 0x0000 },
+ { INS_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { INS_RCP_DATA_OFS, 10, 5, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ins_registers[] = {
+ { INS_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ins_rcp_ctrl_fields },
+ { INS_RCP_DATA, 1, 23, NTHW_FPGA_REG_TYPE_WO, 0, 3, ins_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s km_cam_ctrl_fields[] = {
{ KM_CAM_CTRL_ADR, 13, 0, 0x0000 },
{ KM_CAM_CTRL_CNT, 16, 16, 0x0000 },
@@ -2481,6 +2497,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
+ { MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2639,5 +2656,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 0ab5ae0310..8c0c727e16 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -28,6 +28,7 @@
#define MOD_I2CM (0x93bc7780UL)
#define MOD_IFR (0x9b01f1e6UL)
#define MOD_IIC (0x7629cddbUL)
+#define MOD_INS (0x24df4b78UL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
@@ -47,7 +48,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (34)
+#define MOD_IDX_COUNT (35)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v4 51/86] net/ntnic: add Tx RPL module registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (49 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 50/86] net/ntnic: add Tx INS " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 52/86] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
` (35 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Replacer module can replace a range of bytes in a packet.
The replacement data is stored in a table in the module
and typically contains tunnel data.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index c8841b1dc2..a3d9f94fc6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2355,6 +2355,44 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpl_ext_ctrl_fields[] = {
+ { RPL_EXT_CTRL_ADR, 10, 0, 0x0000 },
+ { RPL_EXT_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_ext_data_fields[] = {
+ { RPL_EXT_DATA_RPL_PTR, 12, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_ctrl_fields[] = {
+ { RPL_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPL_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_data_fields[] = {
+ { RPL_RCP_DATA_DYN, 5, 0, 0x0000 }, { RPL_RCP_DATA_ETH_TYPE_WR, 1, 36, 0x0000 },
+ { RPL_RCP_DATA_EXT_PRIO, 1, 35, 0x0000 }, { RPL_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { RPL_RCP_DATA_OFS, 10, 5, 0x0000 }, { RPL_RCP_DATA_RPL_PTR, 12, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_ctrl_fields[] = {
+ { RPL_RPL_CTRL_ADR, 12, 0, 0x0000 },
+ { RPL_RPL_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_data_fields[] = {
+ { RPL_RPL_DATA_VALUE, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpl_registers[] = {
+ { RPL_EXT_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_ext_ctrl_fields },
+ { RPL_EXT_DATA, 3, 12, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_ext_data_fields },
+ { RPL_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rcp_ctrl_fields },
+ { RPL_RCP_DATA, 1, 37, NTHW_FPGA_REG_TYPE_WO, 0, 6, rpl_rcp_data_fields },
+ { RPL_RPL_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rpl_ctrl_fields },
+ { RPL_RPL_DATA, 5, 128, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_rpl_data_fields },
+};
+
static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
{ RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2498,6 +2536,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
+ { MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2656,5 +2695,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 8c0c727e16..2b059d98ff 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -40,6 +40,7 @@
#define MOD_QSL (0x448ed859UL)
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
+#define MOD_RPL (0x6de535c3UL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
@@ -48,7 +49,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (35)
+#define MOD_IDX_COUNT (36)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
* [PATCH v4 52/86] net/ntnic: update alignment for virt queue structs
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (50 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 51/86] net/ntnic: add Tx RPL " Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 53/86] net/ntnic: enable RSS feature Serhii Iliushyk
` (34 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Update the incorrect alignment of the virt queue structures.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Fix __rte_packed usage
The original NT PMD driver uses pragma pack(1), which is equivalent
to combining the packed and aligned attributes.
Since packed already implies byte alignment,
aligned(1) can be omitted when the packed attribute is used.
---
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
index bde0fed273..e46a3bef28 100644
--- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
+++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <rte_common.h>
#include <unistd.h>
#include "ntos_drv.h"
@@ -67,20 +68,20 @@
} \
} while (0)
-struct __rte_aligned(8) virtq_avail {
+struct __rte_packed virtq_avail {
uint16_t flags;
uint16_t idx;
uint16_t ring[]; /* Queue Size */
};
-struct __rte_aligned(8) virtq_used_elem {
+struct __rte_packed virtq_used_elem {
/* Index of start of used descriptor chain. */
uint32_t id;
/* Total length of the descriptor chain which was used (written to) */
uint32_t len;
};
-struct __rte_aligned(8) virtq_used {
+struct __rte_packed virtq_used {
uint16_t flags;
uint16_t idx;
struct virtq_used_elem ring[]; /* Queue Size */
--
2.45.0
* [PATCH v4 53/86] net/ntnic: enable RSS feature
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (51 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 52/86] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 54/86] net/ntnic: add statistics API Serhii Iliushyk
` (33 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Enable receive side scaling (RSS).
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v4
* Use RTE_MIN instead of the ternary operator.
---
doc/guides/nics/features/ntnic.ini | 3 +
drivers/net/ntnic/include/create_elements.h | 1 +
drivers/net/ntnic/include/flow_api.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 6 ++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 75 +++++++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 73 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 210 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4cb9509742..e5d5abd0ed 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -10,6 +10,8 @@ Link status = Y
Queue start/stop = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
Linux = Y
x86-64 = Y
@@ -37,3 +39,4 @@ port_id = Y
queue = Y
raw_decap = Y
raw_encap = Y
+rss = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 70e6cad195..eaa578e72a 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,7 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_rss flow_rss;
struct flow_action_raw_encap encap;
struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 2e96fa5bed..4a1525f237 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -114,6 +114,8 @@ struct flow_nic_dev {
struct flow_eth_dev *eth_base;
pthread_mutex_t mtx;
+ /* RSS hashing configuration */
+ struct nt_eth_rss_conf rss_conf;
/* next NIC linked list */
struct flow_nic_dev *next;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34f2cad2cd..d61044402d 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1061,6 +1061,12 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+
+ /*
+ * Other
+ */
+ .hw_mod_hsh_rcp_flush = hw_mod_hsh_rcp_flush,
+ .flow_nic_set_hasher_fields = flow_nic_set_hasher_fields,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index af07819a0c..73e3c05f56 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -603,6 +603,49 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RSS", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_rss rss_tmp;
+ const struct rte_flow_action_rss *rss =
+ memcpy_mask_if(&rss_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_rss));
+
+ if (rss->key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: RSS hash key length %u exceeds maximum value %u",
+ rss->key_len, MAX_RSS_KEY_LEN);
+ flow_nic_set_error(ERR_RSS_TOO_LONG_KEY, error);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < rss->queue_num; ++i) {
+ int hw_id = rx_queue_idx_to_hw_id(dev, rss->queue[i]);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+ }
+
+ fd->hsh.func = rss->func;
+ fd->hsh.types = rss->types;
+ fd->hsh.key = rss->key;
+ fd->hsh.key_len = rss->key_len;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RSS func: %d, types: 0x%" PRIX64 ", key_len: %d",
+ dev, rss->func, rss->types, rss->key_len);
+
+ fd->full_offload = 0;
+ *num_queues += rss->queue_num;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MARK:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bfca8f28b1..91be894e87 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -214,6 +214,14 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
dev_info->max_rx_pktlen = HW_MAX_PKT_LEN;
dev_info->max_mtu = MAX_MTU;
+ if (p_adapter_info->fpga_info.profile == FPGA_INFO_PROFILE_INLINE) {
+ dev_info->flow_type_rss_offloads = NT_ETH_RSS_OFFLOAD_MASK;
+ dev_info->hash_key_size = MAX_RSS_KEY_LEN;
+
+ dev_info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(TOEPLITZ);
+ }
+
if (internals->p_drv) {
dev_info->max_rx_queues = internals->nb_rx_queues;
dev_info->max_tx_queues = internals->nb_tx_queues;
@@ -1372,6 +1380,71 @@ promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
return 0;
}
+static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+ struct nt_eth_rss_conf tmp_rss_conf = { 0 };
+ const int hsh_idx = 0; /* hsh index 0 means the default recipe in the HSH module */
+
+ if (rss_conf->rss_key != NULL) {
+ if (rss_conf->rss_key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, NTNIC,
+ "ERROR: - RSS hash key length %u exceeds maximum value %u",
+ rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
+ return -1;
+ }
+
+ rte_memcpy(&tmp_rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+ }
+
+ tmp_rss_conf.algorithm = rss_conf->algorithm;
+
+ tmp_rss_conf.rss_hf = rss_conf->rss_hf;
+ int res = flow_filter_ops->flow_nic_set_hasher_fields(ndev, hsh_idx, tmp_rss_conf);
+
+ if (res == 0) {
+ flow_filter_ops->hw_mod_hsh_rcp_flush(&ndev->be, hsh_idx, 1);
+ rte_memcpy(&ndev->rss_conf, &tmp_rss_conf, sizeof(struct nt_eth_rss_conf));
+
+ } else {
+ NT_LOG(ERR, NTNIC, "ERROR: - RSS hash update failed with error %i", res);
+ }
+
+ return res;
+}
+
+static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+
+ rss_conf->algorithm = (enum rte_eth_hash_function)ndev->rss_conf.algorithm;
+
+ rss_conf->rss_hf = ndev->rss_conf.rss_hf;
+
+ /*
+ * copy full stored key into rss_key and pad it with
+ * zeros up to rss_key_len / MAX_RSS_KEY_LEN
+ */
+ if (rss_conf->rss_key != NULL) {
+ int key_len = RTE_MIN(rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
+ memset(rss_conf->rss_key, 0, rss_conf->rss_key_len);
+ rte_memcpy(rss_conf->rss_key, &ndev->rss_conf.rss_key, key_len);
+ rss_conf->rss_key_len = key_len;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
@@ -1395,6 +1468,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
+ .rss_hash_update = eth_dev_rss_hash_update,
+ .rss_hash_conf_get = rss_hash_conf_get,
};
/*
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 93d89d59f3..a435b60fb2 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -330,6 +330,79 @@ int create_action_elements_inline(struct cnv_action_s *action,
* Non-compatible actions handled here
*/
switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RSS: {
+ const struct rte_flow_action_rss *rss =
+ (const struct rte_flow_action_rss *)actions[aidx].conf;
+
+ switch (rss->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_DEFAULT;
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_TOEPLITZ;
+
+ if (rte_is_power_of_2(rss->queue_num) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - for Toeplitz the number of queues must be power of two");
+ return -1;
+ }
+
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT:
+ case RTE_ETH_HASH_FUNCTION_MAX:
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported function: %u",
+ rss->func);
+ return -1;
+ }
+
+ uint64_t tmp_rss_types = 0;
+
+ switch (rss->level) {
+ case 1:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_OUTERMOST;
+ break;
+
+ case 2:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_INNERMOST;
+ break;
+
+ case 0:
+ /* keep level mask specified at types */
+ action->flow_rss.types = rss->types;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported level: %u",
+ rss->level);
+ return -1;
+ }
+
+ action->flow_rss.level = 0;
+ action->flow_rss.key_len = rss->key_len;
+ action->flow_rss.queue_num = rss->queue_num;
+ action->flow_rss.key = rss->key;
+ action->flow_rss.queue = rss->queue;
+ action->flow_actions[aidx].conf = &action->flow_rss;
+ }
+ break;
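The level handling above clears any RTE_ETH_RSS_LEVEL_* bits the application put into `types` when an explicit level of 1 or 2 is requested, and leaves `types` untouched for level 0. A stand-alone sketch of that masking (the bit positions mirror rte_ethdev.h, where the level occupies bits 50-51, but are redefined locally as an assumption):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed to match rte_ethdev.h: level field in bits 50-51 of rss types. */
#define RSS_LEVEL_MASK		(UINT64_C(0x3) << 50)
#define RSS_LEVEL_OUTERMOST	(UINT64_C(1) << 50)
#define RSS_LEVEL_INNERMOST	(UINT64_C(2) << 50)

/*
 * level 0: keep whatever level mask the caller encoded in types;
 * level 1/2: override it with outermost/innermost. Returns 0 for an
 * unsupported level (a caller would treat that as an error).
 */
static uint64_t resolve_rss_types(uint64_t types, uint32_t level)
{
	switch (level) {
	case 0:
		return types;
	case 1:
		return (types & ~RSS_LEVEL_MASK) | RSS_LEVEL_OUTERMOST;
	case 2:
		return (types & ~RSS_LEVEL_MASK) | RSS_LEVEL_INNERMOST;
	default:
		return 0;
	}
}
```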
+
case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
const struct rte_flow_action_raw_decap *decap =
(const struct rte_flow_action_raw_decap *)actions[aidx]
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 12baa13800..e40ed9b949 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -316,6 +316,13 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+
+ /*
+ * Other
+ */
+ int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+ int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 54/86] net/ntnic: add statistics API
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (52 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 53/86] net/ntnic: enable RSS feature Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 55/86] net/ntnic: add rpf module Serhii Iliushyk
` (32 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add statistics init, setup, get and reset APIs together with
their implementation.
Add statistics FPGA register defines.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 192 +++++++++
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 149 +++++++
drivers/net/ntnic/include/ntos_drv.h | 9 +
.../ntnic/include/stream_binary_flow_api.h | 5 +
drivers/net/ntnic/meson.build | 3 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 1 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 10 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 370 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 40 ++
drivers/net/ntnic/ntnic_ethdev.c | 119 +++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 132 +++++++
drivers/net/ntnic/ntnic_mod_reg.c | 30 ++
drivers/net/ntnic/ntnic_mod_reg.h | 17 +
drivers/net/ntnic/ntutil/nt_util.h | 1 +
21 files changed, 1119 insertions(+), 12 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_adapter.c b/drivers/net/ntnic/adapter/nt4ga_adapter.c
index d9e6716c30..fa72dfda8d 100644
--- a/drivers/net/ntnic/adapter/nt4ga_adapter.c
+++ b/drivers/net/ntnic/adapter/nt4ga_adapter.c
@@ -212,19 +212,26 @@ static int nt4ga_adapter_init(struct adapter_info_s *p_adapter_info)
}
}
- nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
- if (p_nthw_rmc == NULL) {
- NT_LOG(ERR, NTNIC, "Failed to allocate memory for RMC module");
- return -1;
- }
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
- res = nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
- if (res) {
- NT_LOG(ERR, NTNIC, "Failed to initialize RMC module");
- return -1;
- }
+ if (nt4ga_stat_ops != NULL) {
+ /* Nt4ga Stat init/setup */
+ res = nt4ga_stat_ops->nt4ga_stat_init(p_adapter_info);
+
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot initialize the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+
+ res = nt4ga_stat_ops->nt4ga_stat_setup(p_adapter_info);
- nthw_rmc_unblock(p_nthw_rmc, false);
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot setup the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+ }
return 0;
}
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
new file mode 100644
index 0000000000..0e20f3ea45
--- /dev/null
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -0,0 +1,192 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+#include "nt_util.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "nthw_fpga_param_defs.h"
+#include "nt4ga_adapter.h"
+#include "ntnic_nim.h"
+#include "flow_filter.h"
+#include "ntnic_mod_reg.h"
+
+#define DEFAULT_MAX_BPS_SPEED 100e9
+
+static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
+{
+ const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
+ fpga_info_t *fpga_info = &p_adapter_info->fpga_info;
+ nthw_fpga_t *p_fpga = fpga_info->mp_fpga;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+
+ if (p_nt4ga_stat) {
+ memset(p_nt4ga_stat, 0, sizeof(nt4ga_stat_t));
+
+ } else {
+		NT_LOG_DBGX(ERR, NTNIC, "%s: nt4ga_stat is NULL", p_adapter_id_str);
+ return -1;
+ }
+
+ {
+ nthw_stat_t *p_nthw_stat = nthw_stat_new();
+
+ if (!p_nthw_stat) {
+			NT_LOG_DBGX(ERR, NTNIC, "%s: Failed to allocate the STAT module",
+				p_adapter_id_str);
+ return -1;
+ }
+
+ if (nthw_rmc_init(NULL, p_fpga, 0) == 0) {
+ nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
+
+ if (!p_nthw_rmc) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
+ p_nt4ga_stat->mp_nthw_rmc = p_nthw_rmc;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rmc = NULL;
+ }
+
+ p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
+ nthw_stat_init(p_nthw_stat, p_fpga, 0);
+
+ p_nt4ga_stat->mn_rx_host_buffers = p_nthw_stat->m_nb_rx_host_buffers;
+ p_nt4ga_stat->mn_tx_host_buffers = p_nthw_stat->m_nb_tx_host_buffers;
+
+ p_nt4ga_stat->mn_rx_ports = p_nthw_stat->m_nb_rx_ports;
+ p_nt4ga_stat->mn_tx_ports = p_nthw_stat->m_nb_tx_ports;
+ }
+
+ return 0;
+}
+
+static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
+{
+ const int n_physical_adapter_no = p_adapter_info->adapter_no;
+ (void)n_physical_adapter_no;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+
+ /* Allocate and map memory for fpga statistics */
+ {
+ uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
+ sizeof(p_nthw_stat->mp_timestamp));
+ struct nt_dma_s *p_dma;
+ int numa_node = p_adapter_info->fpga_info.numa_node;
+
+ /* FPGA needs a 16K alignment on Statistics */
+ p_dma = nt_dma_alloc(n_stat_size, 0x4000, numa_node);
+
+ if (!p_dma) {
+ NT_LOG_DBGX(ERR, NTNIC, "p_dma alloc failed");
+ return -1;
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%x @%d %" PRIx64 " %" PRIx64, n_stat_size, numa_node,
+ p_dma->addr, p_dma->iova);
+
+ NT_LOG(DBG, NTNIC,
+ "DMA: Physical adapter %02d, PA = 0x%016" PRIX64 " DMA = 0x%016" PRIX64
+ " size = 0x%" PRIX32 "",
+ n_physical_adapter_no, p_dma->iova, p_dma->addr, n_stat_size);
+
+ p_nt4ga_stat->p_stat_dma_virtual = (uint32_t *)p_dma->addr;
+ p_nt4ga_stat->n_stat_size = n_stat_size;
+ p_nt4ga_stat->p_stat_dma = p_dma;
+
+ memset(p_nt4ga_stat->p_stat_dma_virtual, 0xaa, n_stat_size);
+ nthw_stat_set_dma_address(p_nthw_stat, p_dma->iova,
+ p_nt4ga_stat->p_stat_dma_virtual);
+ }
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+
+ p_nt4ga_stat->mp_stat_structs_color =
+ calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_color) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_hb =
+ calloc(p_nt4ga_stat->mn_rx_host_buffers + p_nt4ga_stat->mn_tx_host_buffers,
+ sizeof(struct host_buffer_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_hb) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_rx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_tx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_port_load =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
+
+ if (!p_nt4ga_stat->mp_port_load) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+#ifdef NIM_TRIGGER
+ uint64_t max_bps_speed = nt_get_max_link_speed(p_adapter_info->nt4ga_link.speed_capa);
+
+ if (max_bps_speed == 0)
+ max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+
+#else
+ uint64_t max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+ NT_LOG(ERR, NTNIC, "NIM module not included");
+#endif
+
+ for (int p = 0; p < NUM_ADAPTER_PORTS_MAX; p++) {
+ p_nt4ga_stat->mp_port_load[p].rx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].tx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].rx_pps_max = max_bps_speed / (8 * (20 + 64));
+ p_nt4ga_stat->mp_port_load[p].tx_pps_max = max_bps_speed / (8 * (20 + 64));
+ }
+
+ memset(p_nt4ga_stat->a_stat_structs_color_base, 0,
+ sizeof(struct color_counters) * NT_MAX_COLOR_FLOW_STATS);
+ p_nt4ga_stat->last_timestamp = 0;
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ return 0;
+}
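The per-port pps ceiling computed in the loop above is the classic Ethernet line-rate bound: a minimal 64-byte frame plus 20 bytes of preamble, SFD and inter-frame gap, at 8 bits per byte. As a stand-alone check of the same formula:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Worst-case packet rate for a given line rate: minimal 64-byte
 * Ethernet frame plus 20 bytes of preamble/SFD/inter-frame gap,
 * 8 bits per byte -- the same expression nt4ga_stat_setup() uses
 * for rx_pps_max/tx_pps_max.
 */
static uint64_t max_pps(uint64_t max_bps)
{
	return max_bps / (8 * (20 + 64));
}
```

At 100 Gbit/s this yields roughly 148.8 Mpps, the familiar 100G small-packet line rate.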
+
+static struct nt4ga_stat_ops ops = {
+ .nt4ga_stat_init = nt4ga_stat_init,
+ .nt4ga_stat_setup = nt4ga_stat_setup,
+};
+
+void nt4ga_stat_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "Stat module was initialized");
+ register_nt4ga_stat_ops(&ops);
+}
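Two invariants of the DMA area set up in nt4ga_stat_setup() are worth stating separately: its size is one 32-bit word per counter plus a trailing 64-bit timestamp (the driver uses sizeof(mp_timestamp), a pointer, which is also 8 bytes on the 64-bit hosts it targets), and the FPGA requires the 0x4000 (16 KiB) alignment passed to nt_dma_alloc(). A minimal sketch of both:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Size of the statistics DMA area: one 32-bit counter word per
 * counter plus a trailing 64-bit timestamp slot.
 */
static uint32_t stat_dma_size(int nb_counters)
{
	return (uint32_t)(nb_counters * sizeof(uint32_t) + sizeof(uint64_t));
}

/* The FPGA needs the area to start on a 16 KiB boundary. */
static bool stat_dma_aligned(uint64_t iova)
{
	return (iova & 0x3fffULL) == 0;
}
```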
diff --git a/drivers/net/ntnic/include/common_adapter_defs.h b/drivers/net/ntnic/include/common_adapter_defs.h
new file mode 100644
index 0000000000..6ed9121f0f
--- /dev/null
+++ b/drivers/net/ntnic/include/common_adapter_defs.h
@@ -0,0 +1,15 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _COMMON_ADAPTER_DEFS_H_
+#define _COMMON_ADAPTER_DEFS_H_
+
+/*
+ * Declarations shared by NT adapter types.
+ */
+#define NUM_ADAPTER_MAX (8)
+#define NUM_ADAPTER_PORTS_MAX (128)
+
+#endif /* _COMMON_ADAPTER_DEFS_H_ */
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index eaa578e72a..1456977837 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -46,6 +46,10 @@ struct rte_flow {
uint32_t flow_stat_id;
+ uint64_t stat_pkts;
+ uint64_t stat_bytes;
+ uint8_t stat_tcp_flags;
+
uint16_t caller_id;
};
diff --git a/drivers/net/ntnic/include/nt4ga_adapter.h b/drivers/net/ntnic/include/nt4ga_adapter.h
index 809135f130..fef79ce358 100644
--- a/drivers/net/ntnic/include/nt4ga_adapter.h
+++ b/drivers/net/ntnic/include/nt4ga_adapter.h
@@ -6,6 +6,7 @@
#ifndef _NT4GA_ADAPTER_H_
#define _NT4GA_ADAPTER_H_
+#include "ntnic_stat.h"
#include "nt4ga_link.h"
typedef struct hw_info_s {
@@ -30,6 +31,7 @@ typedef struct hw_info_s {
#include "ntnic_stat.h"
typedef struct adapter_info_s {
+ struct nt4ga_stat_s nt4ga_stat;
struct nt4ga_filter_s nt4ga_filter;
struct nt4ga_link_s nt4ga_link;
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8ebdd98db0..1135e9a539 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -15,6 +15,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
+ pthread_mutex_t stat_lck;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 148088fe1d..2aee3f8425 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -6,6 +6,155 @@
#ifndef NTNIC_STAT_H_
#define NTNIC_STAT_H_
+#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_fpga_model.h"
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+struct nthw_stat {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_stat;
+ int mn_instance;
+
+ int mn_stat_layout_version;
+
+ bool mb_has_tx_stats;
+
+ int m_nb_phy_ports;
+ int m_nb_nim_ports;
+
+ int m_nb_rx_ports;
+ int m_nb_tx_ports;
+
+ int m_nb_rx_host_buffers;
+ int m_nb_tx_host_buffers;
+
+ int m_dbs_present;
+
+ int m_rx_port_replicate;
+
+ int m_nb_color_counters;
+
+ int m_nb_rx_hb_counters;
+ int m_nb_tx_hb_counters;
+
+ int m_nb_rx_port_counters;
+ int m_nb_tx_port_counters;
+
+ int m_nb_counters;
+
+ int m_nb_rpp_per_ps;
+
+ nthw_field_t *mp_fld_dma_ena;
+ nthw_field_t *mp_fld_cnt_clear;
+
+ nthw_field_t *mp_fld_tx_disable;
+
+ nthw_field_t *mp_fld_cnt_freeze;
+
+ nthw_field_t *mp_fld_stat_toggle_missed;
+
+ nthw_field_t *mp_fld_dma_lsb;
+ nthw_field_t *mp_fld_dma_msb;
+
+ nthw_field_t *mp_fld_load_bin;
+ nthw_field_t *mp_fld_load_bps_rx0;
+ nthw_field_t *mp_fld_load_bps_rx1;
+ nthw_field_t *mp_fld_load_bps_tx0;
+ nthw_field_t *mp_fld_load_bps_tx1;
+ nthw_field_t *mp_fld_load_pps_rx0;
+ nthw_field_t *mp_fld_load_pps_rx1;
+ nthw_field_t *mp_fld_load_pps_tx0;
+ nthw_field_t *mp_fld_load_pps_tx1;
+
+ uint64_t m_stat_dma_physical;
+ uint32_t *mp_stat_dma_virtual;
+
+ uint64_t *mp_timestamp;
+};
+
+typedef struct nthw_stat nthw_stat_t;
+typedef struct nthw_stat nthw_stat;
+
+struct color_counters {
+ uint64_t color_packets;
+ uint64_t color_bytes;
+ uint8_t tcp_flags;
+};
+
+struct host_buffer_counters {
+};
+
+struct port_load_counters {
+ uint64_t rx_pps_max;
+ uint64_t tx_pps_max;
+ uint64_t rx_bps_max;
+ uint64_t tx_bps_max;
+};
+
+struct port_counters_v2 {
+};
+
+struct flm_counters_v1 {
+};
+
+struct nt4ga_stat_s {
+ nthw_stat_t *mp_nthw_stat;
+ nthw_rmc_t *mp_nthw_rmc;
+ struct nt_dma_s *p_stat_dma;
+ uint32_t *p_stat_dma_virtual;
+ uint32_t n_stat_size;
+
+ uint64_t last_timestamp;
+
+ int mn_rx_host_buffers;
+ int mn_tx_host_buffers;
+
+ int mn_rx_ports;
+ int mn_tx_ports;
+
+ struct color_counters *mp_stat_structs_color;
+ /* For calculating increments between stats polls */
+ struct color_counters a_stat_structs_color_base[NT_MAX_COLOR_FLOW_STATS];
+
+ /* Port counters for inline */
+ struct {
+ struct port_counters_v2 *mp_stat_structs_port_rx;
+ struct port_counters_v2 *mp_stat_structs_port_tx;
+ } cap;
+
+ struct host_buffer_counters *mp_stat_structs_hb;
+ struct port_load_counters *mp_port_load;
+
+ /* Rx/Tx totals: */
+ uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
+
+ uint64_t a_port_rx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ /* Base is for calculating increments between statistics reads */
+ uint64_t a_port_rx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_packets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_packets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_drops_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_drops_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+};
+
+typedef struct nt4ga_stat_s nt4ga_stat_t;
+
+nthw_stat_t *nthw_stat_new(void);
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_stat_delete(nthw_stat_t *p);
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual);
+int nthw_stat_trigger(nthw_stat_t *p);
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 8fd577dfe3..7b3c8ff3d6 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -57,6 +57,9 @@ struct __rte_cache_aligned ntnic_rx_queue {
struct flow_queue_id_s queue; /* queue info - user id and hw queue index */
struct rte_mempool *mb_pool; /* mbuf memory pool */
uint16_t buf_size; /* Size of data area in mbuf */
+ unsigned long rx_pkts; /* Rx packet statistics */
+ unsigned long rx_bytes; /* Rx bytes statistics */
+ unsigned long err_pkts; /* Rx error packet statistics */
int enabled; /* Enabling/disabling of this queue */
struct hwq_s hwq;
@@ -80,6 +83,9 @@ struct __rte_cache_aligned ntnic_tx_queue {
int rss_target_id;
uint32_t port; /* Tx port for this queue */
+ unsigned long tx_pkts; /* Tx packet statistics */
+ unsigned long tx_bytes; /* Tx bytes statistics */
+ unsigned long err_pkts; /* Tx error packet stat */
int enabled; /* Enabling/disabling of this queue */
enum fpga_info_profile profile; /* Inline / Capture */
};
@@ -95,6 +101,7 @@ struct pmd_internals {
/* Offset of the VF from the PF */
uint8_t vf_offset;
uint32_t port;
+ uint32_t port_id;
nt_meta_port_type_t type;
struct flow_queue_id_s vpq[MAX_QUEUES];
unsigned int vpq_nb_vq;
@@ -107,6 +114,8 @@ struct pmd_internals {
struct rte_ether_addr eth_addrs[NUM_MAC_ADDRS_PER_PORT];
/* Multicast ethernet (MAC) addresses. */
struct rte_ether_addr mc_addrs[NUM_MULTICAST_ADDRS_PER_PORT];
+ uint64_t last_stat_rtc;
+ uint64_t rx_missed;
struct pmd_internals *next;
};
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index e5fe686d99..4ce1561033 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,7 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include <rte_ether.h>
#include "rte_flow.h"
#include "rte_flow_driver.h"
@@ -44,6 +45,10 @@
#define FLOW_MAX_QUEUES 128
#define RAW_ENCAP_DECAP_ELEMS_MAX 16
+
+extern uint64_t rte_tsc_freq;
+extern rte_spinlock_t hwlock;
+
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 92167d24e4..216341bb11 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -25,10 +25,12 @@ includes = [
# all sources
sources = files(
'adapter/nt4ga_adapter.c',
+ 'adapter/nt4ga_stat/nt4ga_stat.c',
'dbsconfig/ntnic_dbsconfig.c',
'link_mgmt/link_100g/nt4ga_link_100g.c',
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
+ 'ntnic_filter/ntnic_filter.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
@@ -48,6 +50,7 @@ sources = files(
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
+ 'nthw/stat/nthw_stat.c',
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index 2345820bdc..b239752674 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -44,6 +44,7 @@ typedef struct nthw_rmc nthw_rmc;
nthw_rmc_t *nthw_rmc_new(void);
int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 4a01424c24..748519aeb4 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,16 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+void nthw_rmc_block(nthw_rmc_t *p)
+{
+ /* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
+ if (!p->mb_administrative_block) {
+ nthw_field_set_flush(p->mp_fld_ctrl_block_stat_drop);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_keep_alive);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_mac_port);
+ }
+}
+
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary)
{
uint32_t n_block_mask = ~0U << (b_is_secondary ? p->mn_nims : p->mn_ports);
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
new file mode 100644
index 0000000000..6adcd2e090
--- /dev/null
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -0,0 +1,370 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "nt_util.h"
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "ntnic_stat.h"
+
+#include <malloc.h>
+
+nthw_stat_t *nthw_stat_new(void)
+{
+ nthw_stat_t *p = malloc(sizeof(nthw_stat_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_stat_t));
+
+ return p;
+}
+
+void nthw_stat_delete(nthw_stat_t *p)
+{
+ if (p)
+ free(p);
+}
+
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ uint64_t n_module_version_packed64 = -1;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_STA, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: STAT %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_stat = mod;
+
+ n_module_version_packed64 = nthw_module_get_version_packed64(p->mp_mod_stat);
+ NT_LOG(DBG, NTHW, "%s: STAT %d: version=0x%08lX", p_adapter_id_str, p->mn_instance,
+ n_module_version_packed64);
+
+ {
+ nthw_register_t *p_reg;
+ /* STA_CFG register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_CFG);
+ p->mp_fld_dma_ena = nthw_register_get_field(p_reg, STA_CFG_DMA_ENA);
+ p->mp_fld_cnt_clear = nthw_register_get_field(p_reg, STA_CFG_CNT_CLEAR);
+
+ /* CFG: fields NOT available from v. 3 */
+ p->mp_fld_tx_disable = nthw_register_query_field(p_reg, STA_CFG_TX_DISABLE);
+ p->mp_fld_cnt_freeze = nthw_register_query_field(p_reg, STA_CFG_CNT_FRZ);
+
+ /* STA_STATUS register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_STATUS);
+ p->mp_fld_stat_toggle_missed =
+ nthw_register_get_field(p_reg, STA_STATUS_STAT_TOGGLE_MISSED);
+
+ /* HOST_ADR registers */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_LSB);
+ p->mp_fld_dma_lsb = nthw_register_get_field(p_reg, STA_HOST_ADR_LSB_LSB);
+
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_MSB);
+ p->mp_fld_dma_msb = nthw_register_get_field(p_reg, STA_HOST_ADR_MSB_MSB);
+
+ /* Binning cycles */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BIN);
+
+ if (p_reg) {
+ p->mp_fld_load_bin = nthw_register_get_field(p_reg, STA_LOAD_BIN_BIN);
+
+ /* Bandwidth load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx0 = NULL;
+ }
+
+ /* Bandwidth load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx1 = NULL;
+ }
+
+ /* Bandwidth load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx0 = NULL;
+ }
+
+ /* Bandwidth load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx1 = NULL;
+ }
+
+ /* Packet load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx0 = NULL;
+ }
+
+ /* Packet load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx1 = NULL;
+ }
+
+ /* Packet load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx0 = NULL;
+ }
+
+ /* Packet load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+
+ } else {
+ p->mp_fld_load_bin = NULL;
+ p->mp_fld_load_bps_rx0 = NULL;
+ p->mp_fld_load_bps_rx1 = NULL;
+ p->mp_fld_load_bps_tx0 = NULL;
+ p->mp_fld_load_bps_tx1 = NULL;
+ p->mp_fld_load_pps_rx0 = NULL;
+ p->mp_fld_load_pps_rx1 = NULL;
+ p->mp_fld_load_pps_tx0 = NULL;
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+ }
+
+ /* Params */
+ p->m_nb_nim_ports = nthw_fpga_get_product_param(p_fpga, NT_NIMS, 0);
+ p->m_nb_phy_ports = nthw_fpga_get_product_param(p_fpga, NT_PHY_PORTS, 0);
+
+ /* VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_STA_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_PORTS, 0);
+ }
+ }
+
+ p->m_nb_rpp_per_ps = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+
+ p->m_nb_tx_ports = nthw_fpga_get_product_param(p_fpga, NT_TX_PORTS, 0);
+ p->m_rx_port_replicate = nthw_fpga_get_product_param(p_fpga, NT_RX_PORT_REPLICATE, 0);
+
+ /* VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_STA_COLORS, 64) * 2;
+
+ if (p->m_nb_color_counters == 0) {
+ /* non-VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_CAT_FUNCS, 0) * 2;
+ }
+
+ p->m_nb_rx_host_buffers = nthw_fpga_get_product_param(p_fpga, NT_QUEUES, 0);
+ p->m_nb_tx_host_buffers = p->m_nb_rx_host_buffers;
+
+ p->m_dbs_present = nthw_fpga_get_product_param(p_fpga, NT_DBS_PRESENT, 0);
+
+ p->m_nb_rx_hb_counters = (p->m_nb_rx_host_buffers * (6 + 2 *
+ (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ?
+ p->m_dbs_present : 0)));
+
+ p->m_nb_tx_hb_counters = 0;
+
+ p->m_nb_rx_port_counters = 42 +
+ 2 * (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ? p->m_dbs_present : 0);
+ p->m_nb_tx_port_counters = 0;
+
+ p->m_nb_counters =
+ p->m_nb_color_counters + p->m_nb_rx_hb_counters + p->m_nb_tx_hb_counters;
+
+ p->mn_stat_layout_version = 0;
+
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 9)) {
+ p->mn_stat_layout_version = 7;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 8)) {
+ p->mn_stat_layout_version = 6;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->mn_stat_layout_version = 5;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 4)) {
+ p->mn_stat_layout_version = 4;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 3)) {
+ p->mn_stat_layout_version = 3;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 2)) {
+ p->mn_stat_layout_version = 2;
+
+ } else if (n_module_version_packed64 > VERSION_PACKED64(0, 0)) {
+ p->mn_stat_layout_version = 1;
+
+ } else {
+ p->mn_stat_layout_version = 0;
+ NT_LOG(ERR, NTHW, "%s: unknown module_version 0x%08lX layout=%d",
+ p_adapter_id_str, n_module_version_packed64, p->mn_stat_layout_version);
+ }
+
+ assert(p->mn_stat_layout_version);
+
+ /* STA module 0.2+ adds IPF counters per port (Rx feature) */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 2))
+ p->m_nb_rx_port_counters += 6;
+
+ /* STA module 0.3+ adds TX stats */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3) || p->m_nb_tx_ports >= 1)
+ p->mb_has_tx_stats = true;
+
+ /* STA module 0.3+ adds TX stat counters */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3))
+ p->m_nb_tx_port_counters += 22;
+
+ /* STA module 0.4+ adds TX drop event counter */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 4))
+ p->m_nb_tx_port_counters += 1; /* TX drop event counter */
+
+ /*
+ * STA module 0.6+ adds pkt filter drop octets+pkts, retransmit and
+ * duplicate counters
+ */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->m_nb_rx_port_counters += 4;
+ p->m_nb_tx_port_counters += 1;
+ }
+
+ p->m_nb_counters += (p->m_nb_rx_ports * p->m_nb_rx_port_counters);
+
+ if (p->mb_has_tx_stats)
+ p->m_nb_counters += (p->m_nb_tx_ports * p->m_nb_tx_port_counters);
+
+ /* Output params (debug) */
+ NT_LOG(DBG, NTHW, "%s: nims=%d rxports=%d txports=%d rxrepl=%d colors=%d queues=%d",
+ p_adapter_id_str, p->m_nb_nim_ports, p->m_nb_rx_ports, p->m_nb_tx_ports,
+ p->m_rx_port_replicate, p->m_nb_color_counters, p->m_nb_rx_host_buffers);
+ NT_LOG(DBG, NTHW, "%s: hbs=%d hbcounters=%d rxcounters=%d txcounters=%d",
+ p_adapter_id_str, p->m_nb_rx_host_buffers, p->m_nb_rx_hb_counters,
+ p->m_nb_rx_port_counters, p->m_nb_tx_port_counters);
+ NT_LOG(DBG, NTHW, "%s: layout=%d", p_adapter_id_str, p->mn_stat_layout_version);
+ NT_LOG(DBG, NTHW, "%s: counters=%d (0x%X)", p_adapter_id_str, p->m_nb_counters,
+ p->m_nb_counters);
+
+ /* Init */
+ if (p->mp_fld_tx_disable)
+ nthw_field_set_flush(p->mp_fld_tx_disable);
+
+ nthw_field_update_register(p->mp_fld_cnt_clear);
+ nthw_field_set_flush(p->mp_fld_cnt_clear);
+ nthw_field_clr_flush(p->mp_fld_cnt_clear);
+
+ nthw_field_update_register(p->mp_fld_stat_toggle_missed);
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_clr_flush(p->mp_fld_dma_ena);
+ nthw_field_update_register(p->mp_fld_dma_ena);
+
+ /* Set the sliding windows size for port load */
+ if (p->mp_fld_load_bin) {
+ uint32_t rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ uint32_t bin =
+ (uint32_t)(((PORT_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) -
+ 1ULL);
+ nthw_field_set_val_flush32(p->mp_fld_load_bin, bin);
+ }
+
+ return 0;
+}
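The STA_LOAD_BIN value written at the end of nthw_stat_init() converts the sliding-window length into FPGA clock ticks: the window in seconds expressed in picoseconds, divided by 32 clock periods of `rpp` picoseconds each, minus one because the register counts from zero. A stand-alone sketch (`window_s` stands in for PORT_LOAD_WINDOWS_SIZE, which is defined elsewhere in the driver, so the value here is only illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sliding-window size for the port-load counters, in units of
 * 32 FPGA clock periods: window length (seconds) converted to
 * picoseconds, divided by 32 * rpp (picoseconds per clock), minus
 * one since the register counts from zero.
 */
static uint32_t load_bin_value(uint64_t window_s, uint32_t rpp)
{
	return (uint32_t)(((window_s * 1000000000000ULL) / (32ULL * rpp)) - 1ULL);
}
```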
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual)
+{
+ assert(p_stat_dma_virtual);
+ p->mp_timestamp = NULL;
+
+ p->m_stat_dma_physical = stat_dma_physical;
+ p->mp_stat_dma_virtual = p_stat_dma_virtual;
+
+ memset(p->mp_stat_dma_virtual, 0, (p->m_nb_counters * sizeof(uint32_t)));
+
+ nthw_field_set_val_flush32(p->mp_fld_dma_msb,
+ (uint32_t)((p->m_stat_dma_physical >> 32) & 0xffffffff));
+ nthw_field_set_val_flush32(p->mp_fld_dma_lsb,
+ (uint32_t)(p->m_stat_dma_physical & 0xffffffff));
+
+ p->mp_timestamp = (uint64_t *)(p->mp_stat_dma_virtual + p->m_nb_counters);
+ NT_LOG(DBG, NTHW,
+ "stat_dma_physical=%" PRIX64 " p_stat_dma_virtual=%" PRIX64
+ " mp_timestamp=%" PRIX64 "", p->m_stat_dma_physical,
+ (uint64_t)p->mp_stat_dma_virtual, (uint64_t)p->mp_timestamp);
+ *p->mp_timestamp = (uint64_t)(int64_t)-1;
+ return 0;
+}
+
+int nthw_stat_trigger(nthw_stat_t *p)
+{
+ int n_toggle_miss = nthw_field_get_updated(p->mp_fld_stat_toggle_missed);
+
+ if (n_toggle_miss)
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ if (p->mp_timestamp)
+ *p->mp_timestamp = -1; /* Clear old ts */
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_set_flush(p->mp_fld_dma_ena);
+
+ return 0;
+}
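The set-DMA-address/trigger pair above uses a sentinel timestamp: software writes an all-ones timestamp, arms the DMA, and knows a fresh snapshot has landed once the device overwrites the sentinel. A minimal host-side sketch of that handshake (hypothetical helper names, not part of the driver):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define STAT_TS_INVALID ((uint64_t)-1)

/* Arm a stats snapshot: clear the timestamp so completion is detectable. */
static void stat_snapshot_arm(volatile uint64_t *ts)
{
	*ts = STAT_TS_INVALID;
	/* ...the DMA-enable field would be flushed here... */
}

/* A snapshot is ready once the device has overwritten the sentinel. */
static bool stat_snapshot_ready(const volatile uint64_t *ts)
{
	return *ts != STAT_TS_INVALID;
}
```

A real poll loop would also bound the wait and honor the toggle-missed flag.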
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 2b059d98ff..ddc144dc02 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -46,6 +46,7 @@
#define MOD_SDC (0xd2369530UL)
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
+#define MOD_STA (0x76fae64dUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7741aa563f..8f196f885f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -45,6 +45,7 @@
#include "nthw_fpga_reg_defs_sdc.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
+#include "nthw_fpga_reg_defs_sta.h"
#include "nthw_fpga_reg_defs_tx_cpy.h"
#include "nthw_fpga_reg_defs_tx_ins.h"
#include "nthw_fpga_reg_defs_tx_rpl.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
new file mode 100644
index 0000000000..640ffcbc52
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -0,0 +1,40 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_STA_
+#define _NTHW_FPGA_REG_DEFS_STA_
+
+/* STA */
+#define STA_CFG (0xcecaf9f4UL)
+#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
+#define STA_CFG_CNT_FRZ (0x8c27a596UL)
+#define STA_CFG_DMA_ENA (0x940dbacUL)
+#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_HOST_ADR_LSB (0xde569336UL)
+#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
+#define STA_HOST_ADR_MSB (0xdf94f901UL)
+#define STA_HOST_ADR_MSB_MSB (0x114798c8UL)
+#define STA_LOAD_BIN (0x2e842591UL)
+#define STA_LOAD_BIN_BIN (0x1a2b942eUL)
+#define STA_LOAD_BPS_RX_0 (0xbf8f4595UL)
+#define STA_LOAD_BPS_RX_0_BPS (0x41647781UL)
+#define STA_LOAD_BPS_RX_1 (0xc8887503UL)
+#define STA_LOAD_BPS_RX_1_BPS (0x7c045e31UL)
+#define STA_LOAD_BPS_TX_0 (0x9ae41a49UL)
+#define STA_LOAD_BPS_TX_0_BPS (0x870b7e06UL)
+#define STA_LOAD_BPS_TX_1 (0xede32adfUL)
+#define STA_LOAD_BPS_TX_1_BPS (0xba6b57b6UL)
+#define STA_LOAD_PPS_RX_0 (0x811173c3UL)
+#define STA_LOAD_PPS_RX_0_PPS (0xbee573fcUL)
+#define STA_LOAD_PPS_RX_1 (0xf6164355UL)
+#define STA_LOAD_PPS_RX_1_PPS (0x83855a4cUL)
+#define STA_LOAD_PPS_TX_0 (0xa47a2c1fUL)
+#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
+#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
+#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_STATUS (0x91c5c51cUL)
+#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_STA_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 91be894e87..3d02e79691 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -65,6 +65,8 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define MAX_RX_PACKETS 128
#define MAX_TX_PACKETS 128
+uint64_t rte_tsc_freq;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -88,7 +90,7 @@ static const struct rte_pci_id nthw_pci_id_map[] = {
static const struct sg_ops_s *sg_ops;
-static rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
+rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
/*
* Store and get adapter info
@@ -156,6 +158,102 @@ get_pdrv_from_pci(struct rte_pci_addr addr)
return p_drv;
}
+static int dpdk_stats_collect(struct pmd_internals *internals, struct rte_eth_stats *stats)
+{
+ const struct ntnic_filter_ops *ntnic_filter_ops = get_ntnic_filter_ops();
+
+ if (ntnic_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "ntnic_filter_ops uninitialized");
+ return -1;
+ }
+
+ unsigned int i;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const int if_index = internals->n_intf_no;
+ uint64_t rx_total = 0;
+ uint64_t rx_total_b = 0;
+ uint64_t tx_total = 0;
+ uint64_t tx_total_b = 0;
+ uint64_t tx_err_total = 0;
+
+ if (!p_nthw_stat || !p_nt4ga_stat || !stats || if_index < 0 ||
+ if_index > NUM_ADAPTER_PORTS_MAX) {
+ NT_LOG_DBGX(WRN, NTNIC, "error exit");
+ return -1;
+ }
+
+ /*
+ * Pull the latest port statistic numbers (Rx/Tx pkts and bytes)
+ * Return values are in the "internals->rxq_scg[]" and "internals->txq_scg[]" arrays
+ */
+ ntnic_filter_ops->poll_statistics(internals);
+
+ memset(stats, 0, sizeof(*stats));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_rx_queues; i++) {
+ stats->q_ipackets[i] = internals->rxq_scg[i].rx_pkts;
+ stats->q_ibytes[i] = internals->rxq_scg[i].rx_bytes;
+ rx_total += stats->q_ipackets[i];
+ rx_total_b += stats->q_ibytes[i];
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_tx_queues; i++) {
+ stats->q_opackets[i] = internals->txq_scg[i].tx_pkts;
+ stats->q_obytes[i] = internals->txq_scg[i].tx_bytes;
+ stats->q_errors[i] = internals->txq_scg[i].err_pkts;
+ tx_total += stats->q_opackets[i];
+ tx_total_b += stats->q_obytes[i];
+ tx_err_total += stats->q_errors[i];
+ }
+
+ stats->imissed = internals->rx_missed;
+ stats->ipackets = rx_total;
+ stats->ibytes = rx_total_b;
+ stats->opackets = tx_total;
+ stats->obytes = tx_total_b;
+ stats->oerrors = tx_err_total;
+
+ return 0;
+}
+
+static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s *p_nt_drv,
+ int n_intf_no)
+{
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ unsigned int i;
+
+ if (!p_nthw_stat || !p_nt4ga_stat || n_intf_no < 0 || n_intf_no > NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /* Rx */
+ for (i = 0; i < internals->nb_rx_queues; i++) {
+ internals->rxq_scg[i].rx_pkts = 0;
+ internals->rxq_scg[i].rx_bytes = 0;
+ internals->rxq_scg[i].err_pkts = 0;
+ }
+
+ internals->rx_missed = 0;
+
+ /* Tx */
+ for (i = 0; i < internals->nb_tx_queues; i++) {
+ internals->txq_scg[i].tx_pkts = 0;
+ internals->txq_scg[i].tx_bytes = 0;
+ internals->txq_scg[i].err_pkts = 0;
+ }
+
+ p_nt4ga_stat->n_totals_reset_timestamp = time(NULL);
+
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
static int
eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
{
@@ -194,6 +292,23 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
return 0;
}
+static int eth_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ dpdk_stats_collect(internals, stats);
+ return 0;
+}
+
+static int eth_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ const int if_index = internals->n_intf_no;
+ dpdk_stats_reset(internals, p_nt_drv, if_index);
+ return 0;
+}
+
static int
eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info)
{
@@ -1453,6 +1568,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_set_link_down = eth_dev_set_link_down,
.dev_close = eth_dev_close,
.link_update = eth_link_update,
+ .stats_get = eth_stats_get,
+ .stats_reset = eth_stats_reset,
.dev_infos_get = eth_dev_infos_get,
.fw_version_get = eth_fw_version_get,
.rx_queue_setup = eth_rx_scg_queue_setup,
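dpdk_stats_collect() above folds the per-queue software counters into port totals, capping the per-queue entries at RTE_ETHDEV_QUEUE_STAT_CNTRS. The aggregation logic in isolation — the structs below are simplified stand-ins, not the real DPDK or driver definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define QUEUE_STAT_CNTRS 16	/* stands in for RTE_ETHDEV_QUEUE_STAT_CNTRS */

struct queue_counters { uint64_t pkts, bytes; };

struct port_stats {
	uint64_t ipackets, ibytes;
	uint64_t q_ipackets[QUEUE_STAT_CNTRS];
	uint64_t q_ibytes[QUEUE_STAT_CNTRS];
};

/* Copy per-queue counters into the stats struct and sum the port totals. */
static void collect_rx(struct port_stats *s, const struct queue_counters *rxq,
		       unsigned int nb_rx)
{
	memset(s, 0, sizeof(*s));

	for (unsigned int i = 0; i < QUEUE_STAT_CNTRS && i < nb_rx; i++) {
		s->q_ipackets[i] = rxq[i].pkts;
		s->q_ibytes[i] = rxq[i].bytes;
		s->ipackets += rxq[i].pkts;
		s->ibytes += rxq[i].bytes;
	}
}
```

Queues beyond QUEUE_STAT_CNTRS still contribute nothing to the per-queue arrays, mirroring the driver's loop bound.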
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index a435b60fb2..ef69064f98 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -8,11 +8,19 @@
#include "create_elements.h"
#include "ntnic_mod_reg.h"
#include "ntos_system.h"
+#include "ntos_drv.h"
#define MAX_RTE_FLOWS 8192
+#define MAX_COLOR_FLOW_STATS 0x400
#define NT_MAX_COLOR_FLOW_STATS 0x400
+#if (MAX_COLOR_FLOW_STATS != NT_MAX_COLOR_FLOW_STATS)
+#error Difference in COLOR_FLOW_STATS. Please synchronize the defines.
+#endif
+
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
@@ -681,6 +689,9 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
/* Cleanup recorded flows */
nt_flows[flow].used = 0;
nt_flows[flow].caller_id = 0;
+ nt_flows[flow].stat_bytes = 0UL;
+ nt_flows[flow].stat_pkts = 0UL;
+ nt_flows[flow].stat_tcp_flags = 0;
}
}
@@ -720,6 +731,127 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int poll_statistics(struct pmd_internals *internals)
+{
+ int flow;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const int if_index = internals->n_intf_no;
+ uint64_t last_stat_rtc = 0;
+
+ if (!p_nt4ga_stat || if_index < 0 || if_index > NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ assert(rte_tsc_freq > 0);
+
+ rte_spinlock_lock(&hwlock);
+
+ uint64_t now_rtc = rte_get_tsc_cycles();
+
+ /*
+ * Check per port at most once a second;
+ * if more than a second has passed since the last stat read, do a new one
+ */
+ if ((now_rtc - internals->last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ return 0;
+ }
+
+ internals->last_stat_rtc = now_rtc;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /*
+ * Add the RX statistics increments since last time we polled.
+ * (No difference if physical or virtual port)
+ */
+ internals->rxq_scg[0].rx_pkts += p_nt4ga_stat->a_port_rx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_packets_base[if_index];
+ internals->rxq_scg[0].rx_bytes += p_nt4ga_stat->a_port_rx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_octets_base[if_index];
+ internals->rxq_scg[0].err_pkts += 0;
+ internals->rx_missed += p_nt4ga_stat->a_port_rx_drops_total[if_index] -
+ p_nt4ga_stat->a_port_rx_drops_base[if_index];
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_rx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_packets_total[if_index];
+ p_nt4ga_stat->a_port_rx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_octets_total[if_index];
+ p_nt4ga_stat->a_port_rx_drops_base[if_index] =
+ p_nt4ga_stat->a_port_rx_drops_total[if_index];
+
+ /* Tx (here we must distinguish between physical and virtual ports) */
+ if (internals->type == PORT_TYPE_PHYSICAL) {
+ /* Add the statistics increments since last time we polled */
+ internals->txq_scg[0].tx_pkts += p_nt4ga_stat->a_port_tx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_packets_base[if_index];
+ internals->txq_scg[0].tx_bytes += p_nt4ga_stat->a_port_tx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_octets_base[if_index];
+ internals->txq_scg[0].err_pkts += 0;
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_tx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_packets_total[if_index];
+ p_nt4ga_stat->a_port_tx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_octets_total[if_index];
+ }
+
+ /* Globally only once a second */
+ if ((now_rtc - last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return 0;
+ }
+
+ last_stat_rtc = now_rtc;
+
+ /* All color counters are global, therefore only one PMD must update them */
+ const struct color_counters *p_color_counters = p_nt4ga_stat->mp_stat_structs_color;
+ struct color_counters *p_color_counters_base = p_nt4ga_stat->a_stat_structs_color_base;
+ uint64_t color_packets_accumulated, color_bytes_accumulated;
+
+ for (flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used) {
+ unsigned int color = nt_flows[flow].flow_stat_id;
+
+ if (color < NT_MAX_COLOR_FLOW_STATS) {
+ color_packets_accumulated = p_color_counters[color].color_packets;
+ nt_flows[flow].stat_pkts +=
+ (color_packets_accumulated -
+ p_color_counters_base[color].color_packets);
+
+ nt_flows[flow].stat_tcp_flags |= p_color_counters[color].tcp_flags;
+
+ color_bytes_accumulated = p_color_counters[color].color_bytes;
+ nt_flows[flow].stat_bytes +=
+ (color_bytes_accumulated -
+ p_color_counters_base[color].color_bytes);
+
+ /* Update the counter bases */
+ p_color_counters_base[color].color_packets =
+ color_packets_accumulated;
+ p_color_counters_base[color].color_bytes = color_bytes_accumulated;
+ }
+ }
+ }
+
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
+static const struct ntnic_filter_ops ntnic_filter_ops = {
+ .poll_statistics = poll_statistics,
+};
+
+void ntnic_filter_init(void)
+{
+ register_ntnic_filter_ops(&ntnic_filter_ops);
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 593b56bf5b..355e2032b1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,21 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+static const struct ntnic_filter_ops *ntnic_filter_ops;
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
+{
+ ntnic_filter_ops = ops;
+}
+
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void)
+{
+ if (ntnic_filter_ops == NULL)
+ ntnic_filter_init();
+
+ return ntnic_filter_ops;
+}
+
static struct link_ops_s *link_100g_ops;
void register_100g_link_ops(struct link_ops_s *ops)
@@ -47,6 +62,21 @@ const struct port_ops *get_port_ops(void)
return port_ops;
}
+static const struct nt4ga_stat_ops *nt4ga_stat_ops;
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops)
+{
+ nt4ga_stat_ops = ops;
+}
+
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void)
+{
+ if (nt4ga_stat_ops == NULL)
+ nt4ga_stat_ops_init();
+
+ return nt4ga_stat_ops;
+}
+
static const struct adapter_ops *adapter_ops;
void register_adapter_ops(const struct adapter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index e40ed9b949..30b9afb7d3 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -111,6 +111,14 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+struct ntnic_filter_ops {
+ int (*poll_statistics)(struct pmd_internals *internals);
+};
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops);
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void);
+void ntnic_filter_init(void);
+
struct link_ops_s {
int (*link_init)(struct adapter_info_s *p_adapter_info, nthw_fpga_t *p_fpga);
};
@@ -175,6 +183,15 @@ void register_port_ops(const struct port_ops *ops);
const struct port_ops *get_port_ops(void);
void port_init(void);
+struct nt4ga_stat_ops {
+ int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+};
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void);
+void nt4ga_stat_ops_init(void);
+
struct adapter_ops {
int (*init)(struct adapter_info_s *p_adapter_info);
int (*deinit)(struct adapter_info_s *p_adapter_info);
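The register/get pairs added to ntnic_mod_reg follow one pattern throughout: a static ops pointer, a register function, and a getter that lazily runs the module's init on first use. A reduced illustration of that registry pattern (demo names only):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified version of the ops-registry pattern used by ntnic_mod_reg. */
struct demo_ops {
	int (*op)(void);
};

static const struct demo_ops *demo_ops_ptr;

static int demo_op(void)
{
	return 42;
}

static void demo_ops_init(void)
{
	static const struct demo_ops ops = { .op = demo_op };

	demo_ops_ptr = &ops;	/* "register" the implementation */
}

static const struct demo_ops *get_demo_ops(void)
{
	if (demo_ops_ptr == NULL)
		demo_ops_init();	/* lazy registration on first use */

	return demo_ops_ptr;
}
```

Keeping the pointer behind a getter lets optional modules stay unlinked: callers simply see NULL ops when a module is absent.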
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index a482fb43ad..f2eccf3501 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -22,6 +22,7 @@
* The windows size must max be 3 min in order to
* prevent overflow.
*/
+#define PORT_LOAD_WINDOWS_SIZE 2ULL
#define FLM_LOAD_WINDOWS_SIZE 2ULL
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
--
2.45.0
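poll_statistics() in this patch accumulates increments against per-counter "base" values instead of copying hardware totals, so the monotonically growing device counters can feed software counters that survive resets. The core pattern in isolation (hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Return the increment since the last poll and advance the base, so the
 * next call reports only new traffic. Assumes the total never decreases.
 */
static uint64_t counter_delta(uint64_t total, uint64_t *base)
{
	uint64_t delta = total - *base;

	*base = total;
	return delta;
}
```

This is the shape of every `*_total - *_base` / update-base pair in poll_statistics(), for port counters and color counters alike.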
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 55/86] net/ntnic: add rpf module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (53 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 54/86] net/ntnic: add statistics API Serhii Iliushyk
@ 2024-10-29 16:41 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 56/86] net/ntnic: add statistics poll Serhii Iliushyk
` (31 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:41 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Receive Port FIFO (RPF) module controls the small FPGA FIFO in which
packets are stored before they enter the packet processor pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 25 +++-
drivers/net/ntnic/include/ntnic_stat.h | 2 +
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +++++++
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 ++++++++++++++++++
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 ++
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +++
10 files changed, 228 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 0e20f3ea45..f733fd5459 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -11,6 +11,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nim.h"
#include "flow_filter.h"
+#include "ntnic_stat.h"
#include "ntnic_mod_reg.h"
#define DEFAULT_MAX_BPS_SPEED 100e9
@@ -43,7 +44,7 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
if (!p_nthw_rmc) {
nthw_stat_delete(p_nthw_stat);
- NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ NT_LOG(ERR, NTNIC, "%s: ERROR rmc allocation", p_adapter_id_str);
return -1;
}
@@ -54,6 +55,22 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
p_nt4ga_stat->mp_nthw_rmc = NULL;
}
+ if (nthw_rpf_init(NULL, p_fpga, p_adapter_info->adapter_no) == 0) {
+ nthw_rpf_t *p_nthw_rpf = nthw_rpf_new();
+
+ if (!p_nthw_rpf) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rpf_init(p_nthw_rpf, p_fpga, p_adapter_info->adapter_no);
+ p_nt4ga_stat->mp_nthw_rpf = p_nthw_rpf;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rpf = NULL;
+ }
+
p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
nthw_stat_init(p_nthw_stat, p_fpga, 0);
@@ -77,6 +94,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_block(p_nt4ga_stat->mp_nthw_rpf);
+
/* Allocate and map memory for fpga statistics */
{
uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
@@ -112,6 +132,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_unblock(p_nt4ga_stat->mp_nthw_rpf);
+
p_nt4ga_stat->mp_stat_structs_color =
calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 2aee3f8425..ed24a892ec 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -8,6 +8,7 @@
#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_rpf.h"
#include "nthw_fpga_model.h"
#define NT_MAX_COLOR_FLOW_STATS 0x400
@@ -102,6 +103,7 @@ struct flm_counters_v1 {
struct nt4ga_stat_s {
nthw_stat_t *mp_nthw_stat;
nthw_rmc_t *mp_nthw_rmc;
+ nthw_rpf_t *mp_nthw_rpf;
struct nt_dma_s *p_stat_dma;
uint32_t *p_stat_dma_virtual;
uint32_t n_stat_size;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 216341bb11..ed5a201fd5 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_iic.c',
'nthw/core/nthw_mac_pcs.c',
'nthw/core/nthw_pcie3.c',
+ 'nthw/core/nthw_rpf.c',
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
new file mode 100644
index 0000000000..4c6c57ba55
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
@@ -0,0 +1,48 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTHW_RPF_HPP_
+#define NTHW_RPF_HPP_
+
+#include "nthw_fpga_model.h"
+#include "pthread.h"
+struct nthw_rpf {
+ nthw_fpga_t *mp_fpga;
+
+ nthw_module_t *m_mod_rpf;
+
+ int mn_instance;
+
+ nthw_register_t *mp_reg_control;
+ nthw_field_t *mp_fld_control_pen;
+ nthw_field_t *mp_fld_control_rpp_en;
+ nthw_field_t *mp_fld_control_st_tgl_en;
+ nthw_field_t *mp_fld_control_keep_alive_en;
+
+ nthw_register_t *mp_ts_sort_prg;
+ nthw_field_t *mp_fld_ts_sort_prg_maturing_delay;
+ nthw_field_t *mp_fld_ts_sort_prg_ts_at_eof;
+
+ int m_default_maturing_delay;
+ bool m_administrative_block; /* used to enforce license expiry */
+
+ pthread_mutex_t rpf_mutex;
+};
+
+typedef struct nthw_rpf nthw_rpf_t;
+typedef struct nthw_rpf nt_rpf;
+
+nthw_rpf_t *nthw_rpf_new(void);
+void nthw_rpf_delete(nthw_rpf_t *p);
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rpf_administrative_block(nthw_rpf_t *p);
+void nthw_rpf_block(nthw_rpf_t *p);
+void nthw_rpf_unblock(nthw_rpf_t *p);
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay);
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p);
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable);
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p);
+
+#endif
diff --git a/drivers/net/ntnic/nthw/core/nthw_rpf.c b/drivers/net/ntnic/nthw/core/nthw_rpf.c
new file mode 100644
index 0000000000..81c704d01a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_rpf.c
@@ -0,0 +1,119 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+#include "nthw_rpf.h"
+
+nthw_rpf_t *nthw_rpf_new(void)
+{
+ nthw_rpf_t *p = malloc(sizeof(nthw_rpf_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_rpf_t));
+
+ return p;
+}
+
+void nthw_rpf_delete(nthw_rpf_t *p)
+{
+ if (p) {
+ memset(p, 0, sizeof(nthw_rpf_t));
+ free(p);
+ }
+}
+
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *p_mod = nthw_fpga_query_module(p_fpga, MOD_RPF, n_instance);
+
+ if (p == NULL)
+ return p_mod == NULL ? -1 : 0;
+
+ if (p_mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: MOD_RPF %d: no such instance",
+ p_fpga->p_fpga_info->mp_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->m_mod_rpf = p_mod;
+
+ p->mp_fpga = p_fpga;
+
+ p->m_administrative_block = false;
+
+ /* CONTROL */
+ p->mp_reg_control = nthw_module_get_register(p->m_mod_rpf, RPF_CONTROL);
+ p->mp_fld_control_pen = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_PEN);
+ p->mp_fld_control_rpp_en = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_RPP_EN);
+ p->mp_fld_control_st_tgl_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_ST_TGL_EN);
+ p->mp_fld_control_keep_alive_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_KEEP_ALIVE_EN);
+
+ /* TS_SORT_PRG */
+ p->mp_ts_sort_prg = nthw_module_get_register(p->m_mod_rpf, RPF_TS_SORT_PRG);
+ p->mp_fld_ts_sort_prg_maturing_delay =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_MATURING_DELAY);
+ p->mp_fld_ts_sort_prg_ts_at_eof =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_TS_AT_EOF);
+ p->m_default_maturing_delay =
+ nthw_fpga_get_product_param(p_fpga, NT_RPF_MATURING_DEL_DEFAULT, 0);
+
+ /* Initialize mutex */
+ pthread_mutex_init(&p->rpf_mutex, NULL);
+ return 0;
+}
+
+void nthw_rpf_administrative_block(nthw_rpf_t *p)
+{
+ /* block all MAC ports */
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+
+ p->m_administrative_block = true;
+}
+
+void nthw_rpf_block(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+}
+
+void nthw_rpf_unblock(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+
+ nthw_field_set_val32(p->mp_fld_control_pen, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_rpp_en, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_st_tgl_en, 1);
+ nthw_field_set_val_flush32(p->mp_fld_control_keep_alive_en, 1);
+}
+
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_maturing_delay, (uint32_t)delay);
+}
+
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ /* Maturing delay is a two's complement 18 bit value, so we retrieve it as signed */
+ return nthw_field_get_signed(p->mp_fld_ts_sort_prg_maturing_delay);
+}
+
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_ts_at_eof, enable);
+}
+
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p)
+{
+ return nthw_field_get_updated(p->mp_fld_ts_sort_prg_ts_at_eof);
+}
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
index 4d495f5b96..9eaaeb550d 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
@@ -1050,6 +1050,18 @@ uint32_t nthw_field_get_val32(const nthw_field_t *p)
return val;
}
+int32_t nthw_field_get_signed(const nthw_field_t *p)
+{
+ uint32_t val;
+
+ nthw_field_get_val(p, &val, 1);
+
+ if (val & (1U << nthw_field_get_bit_pos_high(p))) /* check sign */
+ val = val | ~nthw_field_get_mask(p); /* sign extension */
+
+ return (int32_t)val; /* cast to signed value */
+}
+
uint32_t nthw_field_get_updated(const nthw_field_t *p)
{
uint32_t val;
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
index 7956f0689e..d4e7ab3edd 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
@@ -227,6 +227,7 @@ void nthw_field_get_val(const nthw_field_t *p, uint32_t *p_data, uint32_t len);
void nthw_field_set_val(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
void nthw_field_set_val_flush(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
uint32_t nthw_field_get_val32(const nthw_field_t *p);
+int32_t nthw_field_get_signed(const nthw_field_t *p);
uint32_t nthw_field_get_updated(const nthw_field_t *p);
void nthw_field_update_register(const nthw_field_t *p);
void nthw_field_flush_register(const nthw_field_t *p);
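nthw_field_get_signed() above sign-extends a narrow two's-complement register field (here the 18-bit maturing delay) by OR-ing in the inverted field mask whenever the top field bit is set. The same idea as a generic helper — illustrative only, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the low 'width' bits of 'val' (1 <= width <= 32). */
static int32_t sign_extend(uint32_t val, unsigned int width)
{
	uint32_t mask = (width < 32) ? ((1U << width) - 1U) : UINT32_MAX;

	val &= mask;
	if (val & (1U << (width - 1)))	/* sign bit set? */
		val |= ~mask;		/* fill the upper bits with ones */

	return (int32_t)val;
}
```

For an 18-bit field, 0x3FFFF decodes to -1 and 0x20000 to -131072, matching the "two's complement 18 bit value" comment in nthw_rpf_get_maturing_delay().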
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index ddc144dc02..03122acaf5 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,6 +41,7 @@
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
#define MOD_RPL (0x6de535c3UL)
+#define MOD_RPF (0x8d30dcddUL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 8f196f885f..7067f4b1d0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -39,6 +39,7 @@
#include "nthw_fpga_reg_defs_qsl.h"
#include "nthw_fpga_reg_defs_rac.h"
#include "nthw_fpga_reg_defs_rmc.h"
+#include "nthw_fpga_reg_defs_rpf.h"
#include "nthw_fpga_reg_defs_rpl.h"
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
new file mode 100644
index 0000000000..72f450b85d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_RPF_
+#define _NTHW_FPGA_REG_DEFS_RPF_
+
+/* RPF */
+#define RPF_CONTROL (0x7a5bdb50UL)
+#define RPF_CONTROL_KEEP_ALIVE_EN (0x80be3ffcUL)
+#define RPF_CONTROL_PEN (0xb23137b8UL)
+#define RPF_CONTROL_RPP_EN (0xdb51f109UL)
+#define RPF_CONTROL_ST_TGL_EN (0x45a6ecfaUL)
+#define RPF_TS_SORT_PRG (0xff1d137eUL)
+#define RPF_TS_SORT_PRG_MATURING_DELAY (0x2a38e127UL)
+#define RPF_TS_SORT_PRG_TS_AT_EOF (0x9f27d433UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_RPF_ */
--
2.45.0
* [PATCH v4 56/86] net/ntnic: add statistics poll
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (54 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 55/86] net/ntnic: add rpf module Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 57/86] net/ntnic: added flm stat interface Serhii Iliushyk
` (30 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add a mechanism which polls the statistics module and updates counter
values via the DMA module.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 343 ++++++++++++++++++
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 78 ++++
.../net/ntnic/nthw/core/include/nthw_rmc.h | 5 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 20 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 1 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 128 +++++++
drivers/net/ntnic/ntnic_ethdev.c | 143 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 +
9 files changed, 721 insertions(+)
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index f733fd5459..3afc5b7853 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -16,6 +16,27 @@
#define DEFAULT_MAX_BPS_SPEED 100e9
+/* Inline timestamp format is pcap 32:32 bits (seconds:nanoseconds). Convert to nsecs */
+static inline uint64_t timestamp2ns(uint64_t ts)
+{
+ return ((ts) >> 32) * 1000000000 + ((ts) & 0xffffffff);
+}
+
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual);
+
+static int nt4ga_stat_collect(struct adapter_info_s *p_adapter_info, nt4ga_stat_t *p_nt4ga_stat)
+{
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ p_nt4ga_stat->last_timestamp = timestamp2ns(*p_nthw_stat->mp_timestamp);
+ nt4ga_stat_collect_cap_v1_stats(p_adapter_info, p_nt4ga_stat,
+ p_nt4ga_stat->p_stat_dma_virtual);
+
+ return 0;
+}
+
static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
{
const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
@@ -203,9 +224,331 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return 0;
}
+/* Called with stat mutex locked */
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual)
+{
+ (void)p_adapter_info;
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL)
+ return -1;
+
+ if (!p_nt4ga_stat)
+ return -1;
+
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ if (!p_nthw_stat)
+ return -1;
+
+ const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
+ const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
+ int c, h, p;
+
+ if (p_nthw_stat->mn_stat_layout_version < 6) {
+ NT_LOG(ERR, NTNIC, "HW STA module version not supported");
+ return -1;
+ }
+
+ /* Color counters */
+ for (c = 0; c < p_nthw_stat->m_nb_color_counters / 2; c++) {
+ p_nt4ga_stat->mp_stat_structs_color[c].color_packets += p_stat_dma_virtual[c * 2];
+ p_nt4ga_stat->mp_stat_structs_color[c].color_bytes +=
+ p_stat_dma_virtual[c * 2 + 1];
+ }
+
+ /* Move to Host buffer counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_color_counters;
+
+ for (h = 0; h < p_nthw_stat->m_nb_rx_host_buffers; h++) {
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_packets += p_stat_dma_virtual[h * 8];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_packets += p_stat_dma_virtual[h * 8 + 1];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_packets += p_stat_dma_virtual[h * 8 + 2];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_packets +=
+ p_stat_dma_virtual[h * 8 + 3];
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_bytes += p_stat_dma_virtual[h * 8 + 4];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_bytes += p_stat_dma_virtual[h * 8 + 5];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_bytes += p_stat_dma_virtual[h * 8 + 6];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_bytes +=
+ p_stat_dma_virtual[h * 8 + 7];
+ }
+
+ /* Move to Rx Port counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_rx_hb_counters;
+
+ /* RX ports */
+ for (p = 0; p < n_rx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 23];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].duplicate +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 24];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_ip_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 25];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_udp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 26];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_tcp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 27];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_giant_undersize +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 28];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_baby_giant +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 29];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_not_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 30];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 31];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 32];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 33];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 34];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 35];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 36];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 37];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 43];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 44];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 45];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 46];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 47]
+ : 0;
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 48];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 49];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 50];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 51];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 52];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 53];
+
+ /* Rx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41] +
+ (p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0);
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_rx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+ p_nt4ga_stat->a_port_rx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_rx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Move to Tx Port counters */
+ p_stat_dma_virtual += n_rx_ports * p_nthw_stat->m_nb_rx_port_counters;
+
+ for (p = 0; p < n_tx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 23];
+
+ /* Tx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_tx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+ p_nt4ga_stat->a_port_tx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_tx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Update and get port load counters */
+ for (p = 0; p < n_rx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ for (p = 0; p < n_tx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ return 0;
+}
+
static struct nt4ga_stat_ops ops = {
.nt4ga_stat_init = nt4ga_stat_init,
.nt4ga_stat_setup = nt4ga_stat_setup,
+ .nt4ga_stat_collect = nt4ga_stat_collect
};
void nt4ga_stat_ops_init(void)
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 1135e9a539..38e4d0ca35 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -16,6 +16,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
+ rte_thread_t stat_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index ed24a892ec..0735dbc085 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -85,16 +85,87 @@ struct color_counters {
};
struct host_buffer_counters {
+ uint64_t flush_packets;
+ uint64_t drop_packets;
+ uint64_t fwd_packets;
+ uint64_t dbs_drop_packets;
+ uint64_t flush_bytes;
+ uint64_t drop_bytes;
+ uint64_t fwd_bytes;
+ uint64_t dbs_drop_bytes;
};
struct port_load_counters {
+ uint64_t rx_pps;
uint64_t rx_pps_max;
+ uint64_t tx_pps;
uint64_t tx_pps_max;
+ uint64_t rx_bps;
uint64_t rx_bps_max;
+ uint64_t tx_bps;
uint64_t tx_bps_max;
};
struct port_counters_v2 {
+ /* Rx/Tx common port counters */
+ uint64_t drop_events;
+ uint64_t pkts;
+ /* FPGA counters */
+ uint64_t octets;
+ uint64_t broadcast_pkts;
+ uint64_t multicast_pkts;
+ uint64_t unicast_pkts;
+ uint64_t pkts_alignment;
+ uint64_t pkts_code_violation;
+ uint64_t pkts_crc;
+ uint64_t undersize_pkts;
+ uint64_t oversize_pkts;
+ uint64_t fragments;
+ uint64_t jabbers_not_truncated;
+ uint64_t jabbers_truncated;
+ uint64_t pkts_64_octets;
+ uint64_t pkts_65_to_127_octets;
+ uint64_t pkts_128_to_255_octets;
+ uint64_t pkts_256_to_511_octets;
+ uint64_t pkts_512_to_1023_octets;
+ uint64_t pkts_1024_to_1518_octets;
+ uint64_t pkts_1519_to_2047_octets;
+ uint64_t pkts_2048_to_4095_octets;
+ uint64_t pkts_4096_to_8191_octets;
+ uint64_t pkts_8192_to_max_octets;
+ uint64_t mac_drop_events;
+ uint64_t pkts_lr;
+ /* Rx only port counters */
+ uint64_t duplicate;
+ uint64_t pkts_ip_chksum_error;
+ uint64_t pkts_udp_chksum_error;
+ uint64_t pkts_tcp_chksum_error;
+ uint64_t pkts_giant_undersize;
+ uint64_t pkts_baby_giant;
+ uint64_t pkts_not_isl_vlan_mpls;
+ uint64_t pkts_isl;
+ uint64_t pkts_vlan;
+ uint64_t pkts_isl_vlan;
+ uint64_t pkts_mpls;
+ uint64_t pkts_isl_mpls;
+ uint64_t pkts_vlan_mpls;
+ uint64_t pkts_isl_vlan_mpls;
+ uint64_t pkts_no_filter;
+ uint64_t pkts_dedup_drop;
+ uint64_t pkts_filter_drop;
+ uint64_t pkts_overflow;
+ uint64_t pkts_dbs_drop;
+ uint64_t octets_no_filter;
+ uint64_t octets_dedup_drop;
+ uint64_t octets_filter_drop;
+ uint64_t octets_overflow;
+ uint64_t octets_dbs_drop;
+ uint64_t ipft_first_hit;
+ uint64_t ipft_first_not_hit;
+ uint64_t ipft_mid_hit;
+ uint64_t ipft_mid_not_hit;
+ uint64_t ipft_last_hit;
+ uint64_t ipft_last_not_hit;
};
struct flm_counters_v1 {
@@ -147,6 +218,8 @@ struct nt4ga_stat_s {
uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_drops_total[NUM_ADAPTER_PORTS_MAX];
};
typedef struct nt4ga_stat_s nt4ga_stat_t;
@@ -159,4 +232,9 @@ int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
uint32_t *p_stat_dma_virtual);
int nthw_stat_trigger(nthw_stat_t *p);
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index b239752674..9c40804cd9 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -47,4 +47,9 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p);
+
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 748519aeb4..570a179fc8 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,26 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_sf_ram_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_descr_fifo_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p)
+{
+ return (p->mp_reg_dbg) ? nthw_field_get_updated(p->mp_fld_dbg_merge) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p)
+{
+ return (p->mp_reg_mac_if) ? nthw_field_get_updated(p->mp_fld_mac_if_err) : 0xffffffff;
+}
+
void nthw_rmc_block(nthw_rmc_t *p)
{
/* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d61044402d..aac3144cc0 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
+#include "ntlog.h"
#include "ntnic_mod_reg.h"
#include "flow_api.h"
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
index 6adcd2e090..078eec5e1f 100644
--- a/drivers/net/ntnic/nthw/stat/nthw_stat.c
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -368,3 +368,131 @@ int nthw_stat_trigger(nthw_stat_t *p)
return 0;
}
+
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 3d02e79691..8a9ca2c03d 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -4,6 +4,9 @@
*/
#include <stdint.h>
+#include <stdarg.h>
+
+#include <signal.h>
#include <rte_eal.h>
#include <rte_dev.h>
@@ -25,6 +28,7 @@
#include "nt_util.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
#define THREAD_JOIN(a) rte_thread_join(a, NULL)
#define THREAD_FUNC static uint32_t
@@ -67,6 +71,9 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
uint64_t rte_tsc_freq;
+static void (*previous_handler)(int sig);
+static rte_thread_t shutdown_tid;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -1407,6 +1414,7 @@ drv_deinit(struct drv_s *p_drv)
/* stop statistics threads */
p_drv->ntdrv.b_shutdown = true;
+ THREAD_JOIN(p_nt_drv->stat_thread);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
@@ -1626,6 +1634,87 @@ THREAD_FUNC adapter_flm_update_thread_fn(void *context)
return THREAD_RETURN;
}
+/*
+ * Adapter stat thread
+ */
+THREAD_FUNC adapter_stat_thread_fn(void *context)
+{
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
+
+ if (nt4ga_stat_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "Statistics module uninitialized");
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const char *const p_adapter_id_str = p_nt_drv->adapter_info.mp_adapter_id_str;
+ (void)p_adapter_id_str;
+
+ if (!p_nthw_stat)
+ return THREAD_RETURN;
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: begin", p_adapter_id_str);
+
+ assert(p_nthw_stat);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ nt_os_wait_usec(10 * 1000);
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ uint32_t loop = 0;
+
+ while ((!p_drv->ntdrv.b_shutdown) &&
+ (*p_nthw_stat->mp_timestamp == (uint64_t)-1)) {
+ nt_os_wait_usec(1 * 100);
+
+ if (rte_log_get_level(nt_log_ntnic) == RTE_LOG_DEBUG &&
+ (++loop & 0x3fff) == 0) {
+ if (p_nt4ga_stat->mp_nthw_rpf) {
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+
+ } else if (p_nt4ga_stat->mp_nthw_rmc) {
+ uint32_t sf_ram_of =
+ nthw_rmc_get_status_sf_ram_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+ uint32_t descr_fifo_of =
+ nthw_rmc_get_status_descr_fifo_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+
+ uint32_t dbg_merge =
+ nthw_rmc_get_dbg_merge(p_nt4ga_stat->mp_nthw_rmc);
+ uint32_t mac_if_err =
+ nthw_rmc_get_mac_if_err(p_nt4ga_stat->mp_nthw_rmc);
+
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+ NT_LOG(ERR, NTNIC, "SF RAM Overflow : %08x",
+ sf_ram_of);
+ NT_LOG(ERR, NTNIC, "Descr Fifo Overflow : %08x",
+ descr_fifo_of);
+ NT_LOG(ERR, NTNIC, "DBG Merge : %08x",
+ dbg_merge);
+ NT_LOG(ERR, NTNIC, "MAC If Errors : %08x",
+ mac_if_err);
+ }
+ }
+ }
+
+ /* Check then collect */
+ {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nt4ga_stat_ops->nt4ga_stat_collect(&p_nt_drv->adapter_info, p_nt4ga_stat);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ }
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: end", p_adapter_id_str);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1883,6 +1972,16 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
}
}
+ pthread_mutex_init(&p_nt_drv->stat_lck, NULL);
+ res = THREAD_CTRL_CREATE(&p_nt_drv->stat_thread, "nt4ga_stat_thr", adapter_stat_thread_fn,
+ (void *)p_drv);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
@@ -2073,6 +2172,48 @@ nthw_pci_dev_deinit(struct rte_eth_dev *eth_dev __rte_unused)
return 0;
}
+static void signal_handler_func_int(int sig)
+{
+ if (sig != SIGINT) {
+ signal(sig, previous_handler);
+ raise(sig);
+ return;
+ }
+
+ kill_pmd = 1;
+}
+
+THREAD_FUNC shutdown_thread(void *arg __rte_unused)
+{
+ while (!kill_pmd)
+ nt_os_wait_usec(100 * 1000);
+
+ NT_LOG_DBGX(DBG, NTNIC, "Shutting down because of ctrl+C");
+
+ signal(SIGINT, previous_handler);
+ raise(SIGINT);
+
+ return THREAD_RETURN;
+}
+
+static int init_shutdown(void)
+{
+ NT_LOG(DBG, NTNIC, "Starting shutdown handler");
+ kill_pmd = 0;
+ previous_handler = signal(SIGINT, signal_handler_func_int);
+ THREAD_CREATE(&shutdown_tid, shutdown_thread, NULL);
+
+ /*
+ * One-time estimate of the TSC cycles per second, used to rate limit
+ * statistics polling (e.g. when OVS polls from multiple virtual port
+ * threads); the value does not need to be precise.
+ */
+ uint64_t now_rtc = rte_get_tsc_cycles();
+ nt_os_wait_usec(10 * 1000);
+ rte_tsc_freq = 100 * (rte_get_tsc_cycles() - now_rtc);
+
+ return 0;
+}
+
static int
nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
@@ -2115,6 +2256,8 @@ nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
ret = nthw_pci_dev_init(pci_dev);
+ init_shutdown();
+
NT_LOG_DBGX(DBG, NTNIC, "leave: ret=%d", ret);
return ret;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 30b9afb7d3..8b825d8c48 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -186,6 +186,8 @@ void port_init(void);
struct nt4ga_stat_ops {
int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_collect)(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat);
};
void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
--
2.45.0
* [PATCH v4 57/86] net/ntnic: added flm stat interface
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (55 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 56/86] net/ntnic: add statistics poll Serhii Iliushyk

@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 58/86] net/ntnic: add tsm module Serhii Iliushyk
` (29 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the FLM stat module interface.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 2 ++
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 ++
4 files changed, 16 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 4a1525f237..ed96f77bc0 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -233,4 +233,6 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_filter.h b/drivers/net/ntnic/include/flow_filter.h
index d204c0d882..01777f8c9f 100644
--- a/drivers/net/ntnic/include/flow_filter.h
+++ b/drivers/net/ntnic/include/flow_filter.h
@@ -11,5 +11,6 @@
int flow_filter_init(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device, int adapter_no);
int flow_filter_done(struct flow_nic_dev *dev);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
#endif /* __FLOW_FILTER_HPP__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index aac3144cc0..e953fc1a12 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1048,6 +1048,16 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ (void)ndev;
+ (void)data;
+ (void)size;
+
+ NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
+ return -1;
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
@@ -1062,6 +1072,7 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+ .flow_get_flm_stats = flow_get_flm_stats,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8b825d8c48..8703d478b6 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -336,6 +336,8 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
/*
* Other
*/
--
2.45.0
* [PATCH v4 58/86] net/ntnic: add tsm module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (56 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 57/86] net/ntnic: added flm stat interface Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 59/86] net/ntnic: add STA module Serhii Iliushyk
` (28 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the TSM module, which operates the timers in the physical NIC.
The necessary defines and implementation are included.
The Time Stamp Module controls every aspect of packet timestamping,
including time synchronization, time stamp format, PTP protocol, etc.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 ++++++
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +++++
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 28 +++
7 files changed, 301 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index ed5a201fd5..a6c4fec0be 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -41,6 +41,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
'nthw/core/nthw_gmf.c',
+ 'nthw/core/nthw_tsm.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_tsm.h b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
new file mode 100644
index 0000000000..0a3bcdcaf5
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
@@ -0,0 +1,56 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_TSM_H__
+#define __NTHW_TSM_H__
+
+#include <stdint.h>
+
+#include "nthw_fpga_model.h"
+
+struct nthw_tsm {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_tsm;
+ int mn_instance;
+
+ nthw_field_t *mp_fld_config_ts_format;
+
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t0;
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t1;
+
+ nthw_field_t *mp_fld_timer_timer_t0_max_count;
+
+ nthw_field_t *mp_fld_timer_timer_t1_max_count;
+
+ nthw_register_t *mp_reg_ts_lo;
+ nthw_field_t *mp_fld_ts_lo;
+
+ nthw_register_t *mp_reg_ts_hi;
+ nthw_field_t *mp_fld_ts_hi;
+
+ nthw_register_t *mp_reg_time_lo;
+ nthw_field_t *mp_fld_time_lo;
+
+ nthw_register_t *mp_reg_time_hi;
+ nthw_field_t *mp_fld_time_hi;
+};
+
+typedef struct nthw_tsm nthw_tsm_t;
+typedef struct nthw_tsm nthw_tsm;
+
+nthw_tsm_t *nthw_tsm_new(void);
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts);
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time);
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val);
+
+#endif /* __NTHW_TSM_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_fpga.c b/drivers/net/ntnic/nthw/core/nthw_fpga.c
index 9448c29de1..ca69a9d5b1 100644
--- a/drivers/net/ntnic/nthw/core/nthw_fpga.c
+++ b/drivers/net/ntnic/nthw/core/nthw_fpga.c
@@ -13,6 +13,8 @@
#include "nthw_fpga_instances.h"
#include "nthw_fpga_mod_str_map.h"
+#include "nthw_tsm.h"
+
#include <arpa/inet.h>
int nthw_fpga_get_param_info(struct fpga_info_s *p_fpga_info, nthw_fpga_t *p_fpga)
@@ -179,6 +181,7 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
nthw_hif_t *p_nthw_hif = NULL;
nthw_pcie3_t *p_nthw_pcie3 = NULL;
nthw_rac_t *p_nthw_rac = NULL;
+ nthw_tsm_t *p_nthw_tsm = NULL;
mcu_info_t *p_mcu_info = &p_fpga_info->mcu_info;
uint64_t n_fpga_ident = 0;
@@ -331,6 +334,50 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
p_fpga_info->mp_nthw_hif = p_nthw_hif;
+ p_nthw_tsm = nthw_tsm_new();
+
+ if (p_nthw_tsm) {
+ nthw_tsm_init(p_nthw_tsm, p_fpga, 0);
+
+ nthw_tsm_set_config_ts_format(p_nthw_tsm, 1); /* 1 = TSM: TS format native */
+
+ /* Timer T0 - stat toggle timer */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t0_max_count(p_nthw_tsm, 50 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, true);
+
+ /* Timer T1 - keep alive timer */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t1_max_count(p_nthw_tsm, 100 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, true);
+ }
+
+ p_fpga_info->mp_nthw_tsm = p_nthw_tsm;
+
+ /* TSM sample triggering: test validation... */
+#if defined(DEBUG) && (1)
+ {
+ uint64_t n_time, n_ts;
+ int i;
+
+ for (i = 0; i < 4; i++) {
+ if (p_nthw_hif)
+ nthw_hif_trigger_sample_time(p_nthw_hif);
+
+ else if (p_nthw_pcie3)
+ nthw_pcie3_trigger_sample_time(p_nthw_pcie3);
+
+ nthw_tsm_get_time(p_nthw_tsm, &n_time);
+ nthw_tsm_get_ts(p_nthw_tsm, &n_ts);
+
+ NT_LOG(DBG, NTHW, "%s: TSM time: %016" PRIX64 " %016" PRIX64 "\n",
+ p_adapter_id_str, n_time, n_ts);
+
+ nt_os_wait_usec(1000);
+ }
+ }
+#endif
+
return res;
}
diff --git a/drivers/net/ntnic/nthw/core/nthw_tsm.c b/drivers/net/ntnic/nthw/core/nthw_tsm.c
new file mode 100644
index 0000000000..b88dcb9b0b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_tsm.c
@@ -0,0 +1,167 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_tsm.h"
+
+nthw_tsm_t *nthw_tsm_new(void)
+{
+ nthw_tsm_t *p = malloc(sizeof(nthw_tsm_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_tsm_t));
+
+ return p;
+}
+
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_TSM, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: TSM %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_tsm = mod;
+
+ {
+ nthw_register_t *p_reg;
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_CONFIG);
+ p->mp_fld_config_ts_format = nthw_register_get_field(p_reg, TSM_CONFIG_TS_FORMAT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_CTRL);
+ p->mp_fld_timer_ctrl_timer_en_t0 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T0);
+ p->mp_fld_timer_ctrl_timer_en_t1 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T1);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T0);
+ p->mp_fld_timer_timer_t0_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T0_MAX_COUNT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T1);
+ p->mp_fld_timer_timer_t1_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T1_MAX_COUNT);
+
+ p->mp_reg_time_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_LO);
+ p_reg = p->mp_reg_time_lo;
+ p->mp_fld_time_lo = nthw_register_get_field(p_reg, TSM_TIME_LO_NS);
+
+ p->mp_reg_time_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_HI);
+ p_reg = p->mp_reg_time_hi;
+ p->mp_fld_time_hi = nthw_register_get_field(p_reg, TSM_TIME_HI_SEC);
+
+ p->mp_reg_ts_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_LO);
+ p_reg = p->mp_reg_ts_lo;
+ p->mp_fld_ts_lo = nthw_register_get_field(p_reg, TSM_TS_LO_TIME);
+
+ p->mp_reg_ts_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_HI);
+ p_reg = p->mp_reg_ts_hi;
+ p->mp_fld_ts_hi = nthw_register_get_field(p_reg, TSM_TS_HI_TIME);
+ }
+ return 0;
+}
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts)
+{
+ uint32_t n_ts_lo, n_ts_hi;
+
+ if (!p_ts)
+ return -1;
+
+ n_ts_lo = nthw_field_get_updated(p->mp_fld_ts_lo);
+ n_ts_hi = nthw_field_get_updated(p->mp_fld_ts_hi);
+
+ *p_ts = (((uint64_t)n_ts_hi) << 32UL) | n_ts_lo;
+
+ return 0;
+}
+
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time)
+{
+ uint32_t n_time_lo, n_time_hi;
+
+ if (!p_time)
+ return -1;
+
+ n_time_lo = nthw_field_get_updated(p->mp_fld_time_lo);
+ n_time_hi = nthw_field_get_updated(p->mp_fld_time_hi);
+
+ *p_time = (((uint64_t)n_time_hi) << 32UL) | n_time_lo;
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T0 - stat toggle timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t0_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t0_max_count,
+ n_timer_val); /* ns (50*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T1 - keep alive timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t1_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t1_max_count,
+ n_timer_val); /* ns (100*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val)
+{
+ nthw_field_update_register(p->mp_fld_config_ts_format);
+ /* 0x1: Native - 10ns units, start date: 1970-01-01. */
+ nthw_field_set_val_flush32(p->mp_fld_config_ts_format, n_val);
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 03122acaf5..e6ed9e714b 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -48,6 +48,7 @@
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_STA (0x76fae64dUL)
+#define MOD_TSM (0x35422a24UL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7067f4b1d0..4d299c6aa8 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -44,6 +44,7 @@
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
#include "nthw_fpga_reg_defs_sdc.h"
+#include "nthw_fpga_reg_defs_tsm.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
#include "nthw_fpga_reg_defs_sta.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
new file mode 100644
index 0000000000..a087850aa4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_TSM_
+#define _NTHW_FPGA_REG_DEFS_TSM_
+
+/* TSM */
+#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_TIMER_CTRL (0x648da051UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
+#define TSM_TIMER_T0 (0x417217a5UL)
+#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
+#define TSM_TIMER_T1 (0x36752733UL)
+#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HI (0x175acea1UL)
+#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
+#define TSM_TIME_LO (0x9a55ae90UL)
+#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TS_HI (0xccfe9e5eUL)
+#define TSM_TS_HI_TIME (0xc23fed30UL)
+#define TSM_TS_LO (0x41f1fe6fUL)
+#define TSM_TS_LO_TIME (0xe0292a3eUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
* [PATCH v4 59/86] net/ntnic: add STA module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (57 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 58/86] net/ntnic: add tsm module Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 60/86] net/ntnic: add TSM module Serhii Iliushyk
` (27 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Extend the FPGA map with STA module support,
which enables the statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 92 ++++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 8 ++
3 files changed, 100 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index a3d9f94fc6..efdb084cd6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2486,6 +2486,95 @@ static nthw_fpga_register_init_s slc_registers[] = {
{ SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
};
+static nthw_fpga_field_init_s sta_byte_fields[] = {
+ { STA_BYTE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_cfg_fields[] = {
+ { STA_CFG_CNT_CLEAR, 1, 1, 0 },
+ { STA_CFG_DMA_ENA, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_cv_err_fields[] = {
+ { STA_CV_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_fcs_err_fields[] = {
+ { STA_FCS_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_lsb_fields[] = {
+ { STA_HOST_ADR_LSB_LSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_msb_fields[] = {
+ { STA_HOST_ADR_MSB_MSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_load_bin_fields[] = {
+ { STA_LOAD_BIN_BIN, 32, 0, 8388607 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_0_fields[] = {
+ { STA_LOAD_BPS_RX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_1_fields[] = {
+ { STA_LOAD_BPS_RX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_0_fields[] = {
+ { STA_LOAD_BPS_TX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_1_fields[] = {
+ { STA_LOAD_BPS_TX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_0_fields[] = {
+ { STA_LOAD_PPS_RX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_1_fields[] = {
+ { STA_LOAD_PPS_RX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_0_fields[] = {
+ { STA_LOAD_PPS_TX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_1_fields[] = {
+ { STA_LOAD_PPS_TX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_pckt_fields[] = {
+ { STA_PCKT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_status_fields[] = {
+ { STA_STATUS_STAT_TOGGLE_MISSED, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s sta_registers[] = {
+ { STA_BYTE, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_byte_fields },
+ { STA_CFG, 0, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, sta_cfg_fields },
+ { STA_CV_ERR, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_cv_err_fields },
+ { STA_FCS_ERR, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_fcs_err_fields },
+ { STA_HOST_ADR_LSB, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_lsb_fields },
+ { STA_HOST_ADR_MSB, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_msb_fields },
+ { STA_LOAD_BIN, 8, 32, NTHW_FPGA_REG_TYPE_WO, 8388607, 1, sta_load_bin_fields },
+ { STA_LOAD_BPS_RX_0, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_0_fields },
+ { STA_LOAD_BPS_RX_1, 13, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_1_fields },
+ { STA_LOAD_BPS_TX_0, 15, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_0_fields },
+ { STA_LOAD_BPS_TX_1, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_1_fields },
+ { STA_LOAD_PPS_RX_0, 10, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_0_fields },
+ { STA_LOAD_PPS_RX_1, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_1_fields },
+ { STA_LOAD_PPS_TX_0, 14, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_0_fields },
+ { STA_LOAD_PPS_TX_1, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_1_fields },
+ { STA_PCKT, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_pckt_fields },
+ { STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2537,6 +2626,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
+ { MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2695,5 +2785,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index 150b9dd976..a2ab266931 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -19,5 +19,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RAC, "RAC" },
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
+ { MOD_STA, "STA" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
index 640ffcbc52..0cd183fcaa 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -7,11 +7,17 @@
#define _NTHW_FPGA_REG_DEFS_STA_
/* STA */
+#define STA_BYTE (0xa08364d4UL)
+#define STA_BYTE_CNT (0x3119e6bcUL)
#define STA_CFG (0xcecaf9f4UL)
#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
#define STA_CFG_CNT_FRZ (0x8c27a596UL)
#define STA_CFG_DMA_ENA (0x940dbacUL)
#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_CV_ERR (0x7db7db5dUL)
+#define STA_CV_ERR_CNT (0x2c02fbbeUL)
+#define STA_FCS_ERR (0xa0de1647UL)
+#define STA_FCS_ERR_CNT (0xc68c37d1UL)
#define STA_HOST_ADR_LSB (0xde569336UL)
#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
#define STA_HOST_ADR_MSB (0xdf94f901UL)
@@ -34,6 +40,8 @@
#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_PCKT (0xecc8f30aUL)
+#define STA_PCKT_CNT (0x63291d16UL)
#define STA_STATUS (0x91c5c51cUL)
#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
--
2.45.0
* [PATCH v4 60/86] net/ntnic: add TSM module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (58 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 59/86] net/ntnic: add STA module Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 61/86] net/ntnic: add xstats Serhii Iliushyk
` (26 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Extend the FPGA map with TSM module support,
which enables the statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../supported/nthw_fpga_9563_055_049_0000.c | 394 +++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 177 ++++++++
4 files changed, 572 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e5d5abd0ed..64351bcdc7 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,6 +12,7 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
+Basic stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efdb084cd6..620968ceb6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2575,6 +2575,397 @@ static nthw_fpga_register_init_s sta_registers[] = {
{ STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
};
+static nthw_fpga_field_init_s tsm_con0_config_fields[] = {
+ { TSM_CON0_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON0_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON0_CONFIG_PORT, 3, 0, 0 }, { TSM_CON0_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON0_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_interface_fields[] = {
+ { TSM_CON0_INTERFACE_EX_TERM, 2, 0, 3 }, { TSM_CON0_INTERFACE_IN_REF_PWM, 8, 12, 128 },
+ { TSM_CON0_INTERFACE_PWM_ENA, 1, 2, 0 }, { TSM_CON0_INTERFACE_RESERVED, 1, 3, 0 },
+ { TSM_CON0_INTERFACE_VTERM_PWM, 8, 4, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_hi_fields[] = {
+ { TSM_CON0_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_lo_fields[] = {
+ { TSM_CON0_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_config_fields[] = {
+ { TSM_CON1_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON1_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON1_CONFIG_PORT, 3, 0, 0 }, { TSM_CON1_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON1_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_hi_fields[] = {
+ { TSM_CON1_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_lo_fields[] = {
+ { TSM_CON1_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_config_fields[] = {
+ { TSM_CON2_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON2_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON2_CONFIG_PORT, 3, 0, 0 }, { TSM_CON2_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON2_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_hi_fields[] = {
+ { TSM_CON2_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_lo_fields[] = {
+ { TSM_CON2_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_config_fields[] = {
+ { TSM_CON3_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON3_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON3_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_hi_fields[] = {
+ { TSM_CON3_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_lo_fields[] = {
+ { TSM_CON3_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_config_fields[] = {
+ { TSM_CON4_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON4_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON4_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_hi_fields[] = {
+ { TSM_CON4_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_lo_fields[] = {
+ { TSM_CON4_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_config_fields[] = {
+ { TSM_CON5_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON5_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON5_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_hi_fields[] = {
+ { TSM_CON5_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_lo_fields[] = {
+ { TSM_CON5_SAMPLE_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_config_fields[] = {
+ { TSM_CON6_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON6_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON6_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_hi_fields[] = {
+ { TSM_CON6_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_lo_fields[] = {
+ { TSM_CON6_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_hi_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_lo_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_config_fields[] = {
+ { TSM_CONFIG_NTTS_SRC, 2, 5, 0 }, { TSM_CONFIG_NTTS_SYNC, 1, 4, 0 },
+ { TSM_CONFIG_TIMESET_EDGE, 2, 8, 1 }, { TSM_CONFIG_TIMESET_SRC, 3, 10, 0 },
+ { TSM_CONFIG_TIMESET_UP, 1, 7, 0 }, { TSM_CONFIG_TS_FORMAT, 4, 0, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_int_config_fields[] = {
+ { TSM_INT_CONFIG_AUTO_DISABLE, 1, 0, 0 },
+ { TSM_INT_CONFIG_MASK, 19, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_int_stat_fields[] = {
+ { TSM_INT_STAT_CAUSE, 19, 1, 0 },
+ { TSM_INT_STAT_ENABLE, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_led_fields[] = {
+ { TSM_LED_LED0_BG_COLOR, 2, 3, 0 }, { TSM_LED_LED0_COLOR, 2, 1, 0 },
+ { TSM_LED_LED0_MODE, 1, 0, 0 }, { TSM_LED_LED0_SRC, 4, 5, 0 },
+ { TSM_LED_LED1_BG_COLOR, 2, 12, 0 }, { TSM_LED_LED1_COLOR, 2, 10, 0 },
+ { TSM_LED_LED1_MODE, 1, 9, 0 }, { TSM_LED_LED1_SRC, 4, 14, 1 },
+ { TSM_LED_LED2_BG_COLOR, 2, 21, 0 }, { TSM_LED_LED2_COLOR, 2, 19, 0 },
+ { TSM_LED_LED2_MODE, 1, 18, 0 }, { TSM_LED_LED2_SRC, 4, 23, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_config_fields[] = {
+ { TSM_NTTS_CONFIG_AUTO_HARDSET, 1, 5, 1 },
+ { TSM_NTTS_CONFIG_EXT_CLK_ADJ, 1, 6, 0 },
+ { TSM_NTTS_CONFIG_HIGH_SAMPLE, 1, 4, 0 },
+ { TSM_NTTS_CONFIG_TS_SRC_FORMAT, 4, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ext_stat_fields[] = {
+ { TSM_NTTS_EXT_STAT_MASTER_ID, 8, 16, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_REV, 8, 24, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_STAT, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_hi_fields[] = {
+ { TSM_NTTS_LIMIT_HI_SEC, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_lo_fields[] = {
+ { TSM_NTTS_LIMIT_LO_NS, 32, 0, 100000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_offset_fields[] = {
+ { TSM_NTTS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_hi_fields[] = {
+ { TSM_NTTS_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_lo_fields[] = {
+ { TSM_NTTS_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_stat_fields[] = {
+ { TSM_NTTS_STAT_NTTS_VALID, 1, 0, 0 },
+ { TSM_NTTS_STAT_SIGNAL_LOST, 8, 1, 0 },
+ { TSM_NTTS_STAT_SYNC_LOST, 8, 9, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_hi_fields[] = {
+ { TSM_NTTS_TS_T0_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_lo_fields[] = {
+ { TSM_NTTS_TS_T0_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_offset_fields[] = {
+ { TSM_NTTS_TS_T0_OFFSET_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_ctrl_fields[] = {
+ { TSM_PB_CTRL_INSTMEM_WR, 1, 1, 0 },
+ { TSM_PB_CTRL_RST, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_instmem_fields[] = {
+ { TSM_PB_INSTMEM_MEM_ADDR, 14, 0, 0 },
+ { TSM_PB_INSTMEM_MEM_DATA, 18, 14, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_i_fields[] = {
+ { TSM_PI_CTRL_I_VAL, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_ki_fields[] = {
+ { TSM_PI_CTRL_KI_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_kp_fields[] = {
+ { TSM_PI_CTRL_KP_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_shl_fields[] = {
+ { TSM_PI_CTRL_SHL_VAL, 4, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_stat_fields[] = {
+ { TSM_STAT_HARD_SYNC, 8, 8, 0 }, { TSM_STAT_LINK_CON0, 1, 0, 0 },
+ { TSM_STAT_LINK_CON1, 1, 1, 0 }, { TSM_STAT_LINK_CON2, 1, 2, 0 },
+ { TSM_STAT_LINK_CON3, 1, 3, 0 }, { TSM_STAT_LINK_CON4, 1, 4, 0 },
+ { TSM_STAT_LINK_CON5, 1, 5, 0 }, { TSM_STAT_NTTS_INSYNC, 1, 6, 0 },
+ { TSM_STAT_PTP_MI_PRESENT, 1, 7, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_ctrl_fields[] = {
+ { TSM_TIMER_CTRL_TIMER_EN_T0, 1, 0, 0 },
+ { TSM_TIMER_CTRL_TIMER_EN_T1, 1, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t0_fields[] = {
+ { TSM_TIMER_T0_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t1_fields[] = {
+ { TSM_TIMER_T1_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_hi_fields[] = {
+ { TSM_TIME_HARDSET_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_lo_fields[] = {
+ { TSM_TIME_HARDSET_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hi_fields[] = {
+ { TSM_TIME_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_lo_fields[] = {
+ { TSM_TIME_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_rate_adj_fields[] = {
+ { TSM_TIME_RATE_ADJ_FRACTION, 29, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_hi_fields[] = {
+ { TSM_TS_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_lo_fields[] = {
+ { TSM_TS_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_offset_fields[] = {
+ { TSM_TS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_fields[] = {
+ { TSM_TS_STAT_OVERRUN, 1, 16, 0 },
+ { TSM_TS_STAT_SAMPLES, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_hi_offset_fields[] = {
+ { TSM_TS_STAT_HI_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_lo_offset_fields[] = {
+ { TSM_TS_STAT_LO_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_hi_fields[] = {
+ { TSM_TS_STAT_TAR_HI_SEC, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_lo_fields[] = {
+ { TSM_TS_STAT_TAR_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x_fields[] = {
+ { TSM_TS_STAT_X_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_hi_fields[] = {
+ { TSM_TS_STAT_X2_HI_NS, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_lo_fields[] = {
+ { TSM_TS_STAT_X2_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_utc_offset_fields[] = {
+ { TSM_UTC_OFFSET_SEC, 8, 0, 0 },
+};
+
+static nthw_fpga_register_init_s tsm_registers[] = {
+ { TSM_CON0_CONFIG, 24, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con0_config_fields },
+ {
+ TSM_CON0_INTERFACE, 25, 20, NTHW_FPGA_REG_TYPE_RW, 524291, 5,
+ tsm_con0_interface_fields
+ },
+ { TSM_CON0_SAMPLE_HI, 27, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_hi_fields },
+ { TSM_CON0_SAMPLE_LO, 26, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_lo_fields },
+ { TSM_CON1_CONFIG, 28, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con1_config_fields },
+ { TSM_CON1_SAMPLE_HI, 30, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_hi_fields },
+ { TSM_CON1_SAMPLE_LO, 29, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_lo_fields },
+ { TSM_CON2_CONFIG, 31, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con2_config_fields },
+ { TSM_CON2_SAMPLE_HI, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_hi_fields },
+ { TSM_CON2_SAMPLE_LO, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_lo_fields },
+ { TSM_CON3_CONFIG, 34, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con3_config_fields },
+ { TSM_CON3_SAMPLE_HI, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_hi_fields },
+ { TSM_CON3_SAMPLE_LO, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_lo_fields },
+ { TSM_CON4_CONFIG, 37, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con4_config_fields },
+ { TSM_CON4_SAMPLE_HI, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_hi_fields },
+ { TSM_CON4_SAMPLE_LO, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_lo_fields },
+ { TSM_CON5_CONFIG, 40, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con5_config_fields },
+ { TSM_CON5_SAMPLE_HI, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_hi_fields },
+ { TSM_CON5_SAMPLE_LO, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_lo_fields },
+ { TSM_CON6_CONFIG, 43, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con6_config_fields },
+ { TSM_CON6_SAMPLE_HI, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_hi_fields },
+ { TSM_CON6_SAMPLE_LO, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_lo_fields },
+ {
+ TSM_CON7_HOST_SAMPLE_HI, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_hi_fields
+ },
+ {
+ TSM_CON7_HOST_SAMPLE_LO, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_lo_fields
+ },
+ { TSM_CONFIG, 0, 13, NTHW_FPGA_REG_TYPE_RW, 257, 6, tsm_config_fields },
+ { TSM_INT_CONFIG, 2, 20, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_int_config_fields },
+ { TSM_INT_STAT, 3, 20, NTHW_FPGA_REG_TYPE_MIXED, 0, 2, tsm_int_stat_fields },
+ { TSM_LED, 4, 27, NTHW_FPGA_REG_TYPE_RW, 16793600, 12, tsm_led_fields },
+ { TSM_NTTS_CONFIG, 13, 7, NTHW_FPGA_REG_TYPE_RW, 32, 4, tsm_ntts_config_fields },
+ { TSM_NTTS_EXT_STAT, 15, 32, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, tsm_ntts_ext_stat_fields },
+ { TSM_NTTS_LIMIT_HI, 23, 16, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_limit_hi_fields },
+ { TSM_NTTS_LIMIT_LO, 22, 32, NTHW_FPGA_REG_TYPE_RW, 100000, 1, tsm_ntts_limit_lo_fields },
+ { TSM_NTTS_OFFSET, 21, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_offset_fields },
+ { TSM_NTTS_SAMPLE_HI, 19, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_hi_fields },
+ { TSM_NTTS_SAMPLE_LO, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_lo_fields },
+ { TSM_NTTS_STAT, 14, 17, NTHW_FPGA_REG_TYPE_RO, 0, 3, tsm_ntts_stat_fields },
+ { TSM_NTTS_TS_T0_HI, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_hi_fields },
+ { TSM_NTTS_TS_T0_LO, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_lo_fields },
+ {
+ TSM_NTTS_TS_T0_OFFSET, 20, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ntts_ts_t0_offset_fields
+ },
+ { TSM_PB_CTRL, 63, 2, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_ctrl_fields },
+ { TSM_PB_INSTMEM, 64, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_instmem_fields },
+ { TSM_PI_CTRL_I, 54, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_i_fields },
+ { TSM_PI_CTRL_KI, 52, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_ki_fields },
+ { TSM_PI_CTRL_KP, 51, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_kp_fields },
+ { TSM_PI_CTRL_SHL, 53, 4, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_shl_fields },
+ { TSM_STAT, 1, 16, NTHW_FPGA_REG_TYPE_RO, 0, 9, tsm_stat_fields },
+ { TSM_TIMER_CTRL, 48, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_timer_ctrl_fields },
+ { TSM_TIMER_T0, 49, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t0_fields },
+ { TSM_TIMER_T1, 50, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t1_fields },
+ { TSM_TIME_HARDSET_HI, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_hi_fields },
+ { TSM_TIME_HARDSET_LO, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_lo_fields },
+ { TSM_TIME_HI, 9, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_hi_fields },
+ { TSM_TIME_LO, 8, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_lo_fields },
+ { TSM_TIME_RATE_ADJ, 10, 29, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_rate_adj_fields },
+ { TSM_TS_HI, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_hi_fields },
+ { TSM_TS_LO, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_lo_fields },
+ { TSM_TS_OFFSET, 7, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ts_offset_fields },
+ { TSM_TS_STAT, 55, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, tsm_ts_stat_fields },
+ {
+ TSM_TS_STAT_HI_OFFSET, 62, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_hi_offset_fields
+ },
+ {
+ TSM_TS_STAT_LO_OFFSET, 61, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_lo_offset_fields
+ },
+ { TSM_TS_STAT_TAR_HI, 57, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_hi_fields },
+ { TSM_TS_STAT_TAR_LO, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_lo_fields },
+ { TSM_TS_STAT_X, 58, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x_fields },
+ { TSM_TS_STAT_X2_HI, 60, 16, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_hi_fields },
+ { TSM_TS_STAT_X2_LO, 59, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_lo_fields },
+ { TSM_UTC_OFFSET, 65, 8, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_utc_offset_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2627,6 +3018,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
{ MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
+ { MOD_TSM, 0, MOD_TSM, 0, 8, NTHW_FPGA_BUS_TYPE_RAB2, 1024, 66, tsm_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2785,5 +3177,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 37, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index a2ab266931..e8ed7faf0d 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -20,5 +20,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
{ MOD_STA, "STA" },
+ { MOD_TSM, "TSM" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
index a087850aa4..cdb733ee17 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -7,8 +7,158 @@
#define _NTHW_FPGA_REG_DEFS_TSM_
/* TSM */
+#define TSM_CON0_CONFIG (0xf893d371UL)
+#define TSM_CON0_CONFIG_BLIND (0x59ccfcbUL)
+#define TSM_CON0_CONFIG_DC_SRC (0x1879812bUL)
+#define TSM_CON0_CONFIG_PORT (0x3ff0bb08UL)
+#define TSM_CON0_CONFIG_PPSIN_2_5V (0xb8e78227UL)
+#define TSM_CON0_CONFIG_SAMPLE_EDGE (0x4a4022ebUL)
+#define TSM_CON0_INTERFACE (0x76e93b59UL)
+#define TSM_CON0_INTERFACE_EX_TERM (0xd079b416UL)
+#define TSM_CON0_INTERFACE_IN_REF_PWM (0x16f73c33UL)
+#define TSM_CON0_INTERFACE_PWM_ENA (0x3629e73fUL)
+#define TSM_CON0_INTERFACE_RESERVED (0xf9c5066UL)
+#define TSM_CON0_INTERFACE_VTERM_PWM (0x6d2b1e23UL)
+#define TSM_CON0_SAMPLE_HI (0x6e536b8UL)
+#define TSM_CON0_SAMPLE_HI_SEC (0x5fc26159UL)
+#define TSM_CON0_SAMPLE_LO (0x8bea5689UL)
+#define TSM_CON0_SAMPLE_LO_NS (0x13d0010dUL)
+#define TSM_CON1_CONFIG (0x3439d3efUL)
+#define TSM_CON1_CONFIG_BLIND (0x98932ebdUL)
+#define TSM_CON1_CONFIG_DC_SRC (0xa1825ac3UL)
+#define TSM_CON1_CONFIG_PORT (0xe266628dUL)
+#define TSM_CON1_CONFIG_PPSIN_2_5V (0x6f05027fUL)
+#define TSM_CON1_CONFIG_SAMPLE_EDGE (0x2f2719adUL)
+#define TSM_CON1_SAMPLE_HI (0xc76be978UL)
+#define TSM_CON1_SAMPLE_HI_SEC (0xe639bab1UL)
+#define TSM_CON1_SAMPLE_LO (0x4a648949UL)
+#define TSM_CON1_SAMPLE_LO_NS (0x8edfe07bUL)
+#define TSM_CON2_CONFIG (0xbab6d40cUL)
+#define TSM_CON2_CONFIG_BLIND (0xe4f20b66UL)
+#define TSM_CON2_CONFIG_DC_SRC (0xb0ff30baUL)
+#define TSM_CON2_CONFIG_PORT (0x5fac0e43UL)
+#define TSM_CON2_CONFIG_PPSIN_2_5V (0xcc5384d6UL)
+#define TSM_CON2_CONFIG_SAMPLE_EDGE (0x808e5467UL)
+#define TSM_CON2_SAMPLE_HI (0x5e898f79UL)
+#define TSM_CON2_SAMPLE_HI_SEC (0xf744d0c8UL)
+#define TSM_CON2_SAMPLE_LO (0xd386ef48UL)
+#define TSM_CON2_SAMPLE_LO_NS (0xf2bec5a0UL)
+#define TSM_CON3_CONFIG (0x761cd492UL)
+#define TSM_CON3_CONFIG_BLIND (0x79fdea10UL)
+#define TSM_CON3_CONFIG_PORT (0x823ad7c6UL)
+#define TSM_CON3_CONFIG_SAMPLE_EDGE (0xe5e96f21UL)
+#define TSM_CON3_SAMPLE_HI (0x9f0750b9UL)
+#define TSM_CON3_SAMPLE_HI_SEC (0x4ebf0b20UL)
+#define TSM_CON3_SAMPLE_LO (0x12083088UL)
+#define TSM_CON3_SAMPLE_LO_NS (0x6fb124d6UL)
+#define TSM_CON4_CONFIG (0x7cd9dd8bUL)
+#define TSM_CON4_CONFIG_BLIND (0x1c3040d0UL)
+#define TSM_CON4_CONFIG_PORT (0xff49d19eUL)
+#define TSM_CON4_CONFIG_SAMPLE_EDGE (0x4adc9b2UL)
+#define TSM_CON4_SAMPLE_HI (0xb63c453aUL)
+#define TSM_CON4_SAMPLE_HI_SEC (0xd5be043aUL)
+#define TSM_CON4_SAMPLE_LO (0x3b33250bUL)
+#define TSM_CON4_SAMPLE_LO_NS (0xa7c8e16UL)
+#define TSM_CON5_CONFIG (0xb073dd15UL)
+#define TSM_CON5_CONFIG_BLIND (0x813fa1a6UL)
+#define TSM_CON5_CONFIG_PORT (0x22df081bUL)
+#define TSM_CON5_CONFIG_SAMPLE_EDGE (0x61caf2f4UL)
+#define TSM_CON5_SAMPLE_HI (0x77b29afaUL)
+#define TSM_CON5_SAMPLE_HI_SEC (0x6c45dfd2UL)
+#define TSM_CON5_SAMPLE_LO (0xfabdfacbUL)
+#define TSM_CON5_SAMPLE_LO_TIME (0x945d87e8UL)
+#define TSM_CON6_CONFIG (0x3efcdaf6UL)
+#define TSM_CON6_CONFIG_BLIND (0xfd5e847dUL)
+#define TSM_CON6_CONFIG_PORT (0x9f1564d5UL)
+#define TSM_CON6_CONFIG_SAMPLE_EDGE (0xce63bf3eUL)
+#define TSM_CON6_SAMPLE_HI (0xee50fcfbUL)
+#define TSM_CON6_SAMPLE_HI_SEC (0x7d38b5abUL)
+#define TSM_CON6_SAMPLE_LO (0x635f9ccaUL)
+#define TSM_CON6_SAMPLE_LO_NS (0xeb124abbUL)
+#define TSM_CON7_HOST_SAMPLE_HI (0xdcd90e52UL)
+#define TSM_CON7_HOST_SAMPLE_HI_SEC (0xd98d3618UL)
+#define TSM_CON7_HOST_SAMPLE_LO (0x51d66e63UL)
+#define TSM_CON7_HOST_SAMPLE_LO_NS (0x8f5594ddUL)
#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_NTTS_SRC (0x1b60227bUL)
+#define TSM_CONFIG_NTTS_SYNC (0x43e0a69dUL)
+#define TSM_CONFIG_TIMESET_EDGE (0x8c381127UL)
+#define TSM_CONFIG_TIMESET_SRC (0xe7590a31UL)
+#define TSM_CONFIG_TIMESET_UP (0x561980c1UL)
#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_INT_CONFIG (0x9a0d52dUL)
+#define TSM_INT_CONFIG_AUTO_DISABLE (0x9581470UL)
+#define TSM_INT_CONFIG_MASK (0xf00cd3d7UL)
+#define TSM_INT_STAT (0xa4611a70UL)
+#define TSM_INT_STAT_CAUSE (0x315168cfUL)
+#define TSM_INT_STAT_ENABLE (0x980a12d1UL)
+#define TSM_LED (0x6ae05f87UL)
+#define TSM_LED_LED0_BG_COLOR (0x897cf9eeUL)
+#define TSM_LED_LED0_COLOR (0x6d7ada39UL)
+#define TSM_LED_LED0_MODE (0x6087b644UL)
+#define TSM_LED_LED0_SRC (0x4fe29639UL)
+#define TSM_LED_LED1_BG_COLOR (0x66be92d0UL)
+#define TSM_LED_LED1_COLOR (0xcb0dd18dUL)
+#define TSM_LED_LED1_MODE (0xabdb65e1UL)
+#define TSM_LED_LED1_SRC (0x7282bf89UL)
+#define TSM_LED_LED2_BG_COLOR (0x8d8929d3UL)
+#define TSM_LED_LED2_COLOR (0xfae5cb10UL)
+#define TSM_LED_LED2_MODE (0x2d4f174fUL)
+#define TSM_LED_LED2_SRC (0x3522c559UL)
+#define TSM_NTTS_CONFIG (0x8bc38bdeUL)
+#define TSM_NTTS_CONFIG_AUTO_HARDSET (0xd75be25dUL)
+#define TSM_NTTS_CONFIG_EXT_CLK_ADJ (0x700425b6UL)
+#define TSM_NTTS_CONFIG_HIGH_SAMPLE (0x37135b7eUL)
+#define TSM_NTTS_CONFIG_TS_SRC_FORMAT (0x6e6e707UL)
+#define TSM_NTTS_EXT_STAT (0x2b0315b7UL)
+#define TSM_NTTS_EXT_STAT_MASTER_ID (0xf263315eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_REV (0xd543795eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_STAT (0x92d96f5eUL)
+#define TSM_NTTS_LIMIT_HI (0x1ddaa85fUL)
+#define TSM_NTTS_LIMIT_HI_SEC (0x315c6ef2UL)
+#define TSM_NTTS_LIMIT_LO (0x90d5c86eUL)
+#define TSM_NTTS_LIMIT_LO_NS (0xe6d94d9aUL)
+#define TSM_NTTS_OFFSET (0x6436e72UL)
+#define TSM_NTTS_OFFSET_NS (0x12d43a06UL)
+#define TSM_NTTS_SAMPLE_HI (0xcdc8aa3eUL)
+#define TSM_NTTS_SAMPLE_HI_SEC (0x4f6588fdUL)
+#define TSM_NTTS_SAMPLE_LO (0x40c7ca0fUL)
+#define TSM_NTTS_SAMPLE_LO_NS (0x6e43ff97UL)
+#define TSM_NTTS_STAT (0x6502b820UL)
+#define TSM_NTTS_STAT_NTTS_VALID (0x3e184471UL)
+#define TSM_NTTS_STAT_SIGNAL_LOST (0x178bedfdUL)
+#define TSM_NTTS_STAT_SYNC_LOST (0xe4cd53dfUL)
+#define TSM_NTTS_TS_T0_HI (0x1300d1b6UL)
+#define TSM_NTTS_TS_T0_HI_TIME (0xa016ae4fUL)
+#define TSM_NTTS_TS_T0_LO (0x9e0fb187UL)
+#define TSM_NTTS_TS_T0_LO_TIME (0x82006941UL)
+#define TSM_NTTS_TS_T0_OFFSET (0xbf70ce4fUL)
+#define TSM_NTTS_TS_T0_OFFSET_COUNT (0x35dd4398UL)
+#define TSM_PB_CTRL (0x7a8b60faUL)
+#define TSM_PB_CTRL_INSTMEM_WR (0xf96e2cbcUL)
+#define TSM_PB_CTRL_RESET (0xa38ade8bUL)
+#define TSM_PB_CTRL_RST (0x3aaa82f4UL)
+#define TSM_PB_INSTMEM (0xb54aeecUL)
+#define TSM_PB_INSTMEM_MEM_ADDR (0x9ac79b6eUL)
+#define TSM_PB_INSTMEM_MEM_DATA (0x65aefa38UL)
+#define TSM_PI_CTRL_I (0x8d71a4e2UL)
+#define TSM_PI_CTRL_I_VAL (0x98baedc9UL)
+#define TSM_PI_CTRL_KI (0xa1bd86cbUL)
+#define TSM_PI_CTRL_KI_GAIN (0x53faa916UL)
+#define TSM_PI_CTRL_KP (0xc5d62e0bUL)
+#define TSM_PI_CTRL_KP_GAIN (0x7723fa45UL)
+#define TSM_PI_CTRL_SHL (0xaa518701UL)
+#define TSM_PI_CTRL_SHL_VAL (0x56f56a6fUL)
+#define TSM_STAT (0xa55bf677UL)
+#define TSM_STAT_HARD_SYNC (0x7fff20fdUL)
+#define TSM_STAT_LINK_CON0 (0x216086f0UL)
+#define TSM_STAT_LINK_CON1 (0x5667b666UL)
+#define TSM_STAT_LINK_CON2 (0xcf6ee7dcUL)
+#define TSM_STAT_LINK_CON3 (0xb869d74aUL)
+#define TSM_STAT_LINK_CON4 (0x260d42e9UL)
+#define TSM_STAT_LINK_CON5 (0x510a727fUL)
+#define TSM_STAT_NTTS_INSYNC (0xb593a245UL)
+#define TSM_STAT_PTP_MI_PRESENT (0x43131eb0UL)
#define TSM_TIMER_CTRL (0x648da051UL)
#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
@@ -16,13 +166,40 @@
#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
#define TSM_TIMER_T1 (0x36752733UL)
#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HARDSET_HI (0xf28bdb46UL)
+#define TSM_TIME_HARDSET_HI_TIME (0x2d9a28baUL)
+#define TSM_TIME_HARDSET_LO (0x7f84bb77UL)
+#define TSM_TIME_HARDSET_LO_TIME (0xf8cefb4UL)
#define TSM_TIME_HI (0x175acea1UL)
#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
#define TSM_TIME_LO (0x9a55ae90UL)
#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TIME_RATE_ADJ (0xb1cc4bb1UL)
+#define TSM_TIME_RATE_ADJ_FRACTION (0xb7ab96UL)
#define TSM_TS_HI (0xccfe9e5eUL)
#define TSM_TS_HI_TIME (0xc23fed30UL)
#define TSM_TS_LO (0x41f1fe6fUL)
#define TSM_TS_LO_TIME (0xe0292a3eUL)
+#define TSM_TS_OFFSET (0x4b2e6e13UL)
+#define TSM_TS_OFFSET_NS (0x68c286b9UL)
+#define TSM_TS_STAT (0x64d41b8cUL)
+#define TSM_TS_STAT_OVERRUN (0xad9db92aUL)
+#define TSM_TS_STAT_SAMPLES (0xb6350e0bUL)
+#define TSM_TS_STAT_HI_OFFSET (0x1aa2ddf2UL)
+#define TSM_TS_STAT_HI_OFFSET_NS (0xeb040e0fUL)
+#define TSM_TS_STAT_LO_OFFSET (0x81218579UL)
+#define TSM_TS_STAT_LO_OFFSET_NS (0xb7ff33UL)
+#define TSM_TS_STAT_TAR_HI (0x65af24b6UL)
+#define TSM_TS_STAT_TAR_HI_SEC (0x7e92f619UL)
+#define TSM_TS_STAT_TAR_LO (0xe8a04487UL)
+#define TSM_TS_STAT_TAR_LO_NS (0xf7b3f439UL)
+#define TSM_TS_STAT_X (0x419f0ddUL)
+#define TSM_TS_STAT_X_NS (0xa48c3f27UL)
+#define TSM_TS_STAT_X2_HI (0xd6b1c517UL)
+#define TSM_TS_STAT_X2_HI_NS (0x4288c50fUL)
+#define TSM_TS_STAT_X2_LO (0x5bbea526UL)
+#define TSM_TS_STAT_X2_LO_NS (0x92633c13UL)
+#define TSM_UTC_OFFSET (0xf622a13aUL)
+#define TSM_UTC_OFFSET_SEC (0xd9c80209UL)
#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
* [PATCH v4 61/86] net/ntnic: add xstats
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (59 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 60/86] net/ntnic: add TSM module Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 62/86] net/ntnic: added flow statistics Serhii Iliushyk
` (25 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the extended statistics implementation and
its initialization.
Extend the eth_dev_ops API with the new xstats callbacks.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 36 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 112 +++
drivers/net/ntnic/ntnic_mod_reg.c | 15 +
drivers/net/ntnic/ntnic_mod_reg.h | 28 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 ++++++++++++++++++
7 files changed, 1022 insertions(+)
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 64351bcdc7..947c7ba3a1 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -13,6 +13,7 @@ Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
Basic stats = Y
+Extended stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 0735dbc085..4d4affa3cf 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -169,6 +169,39 @@ struct port_counters_v2 {
};
struct flm_counters_v1 {
+ /* FLM 0.17 */
+ uint64_t current;
+ uint64_t learn_done;
+ uint64_t learn_ignore;
+ uint64_t learn_fail;
+ uint64_t unlearn_done;
+ uint64_t unlearn_ignore;
+ uint64_t auto_unlearn_done;
+ uint64_t auto_unlearn_ignore;
+ uint64_t auto_unlearn_fail;
+ uint64_t timeout_unlearn_done;
+ uint64_t rel_done;
+ uint64_t rel_ignore;
+ /* FLM 0.20 */
+ uint64_t prb_done;
+ uint64_t prb_ignore;
+ uint64_t sta_done;
+ uint64_t inf_done;
+ uint64_t inf_skip;
+ uint64_t pck_hit;
+ uint64_t pck_miss;
+ uint64_t pck_unh;
+ uint64_t pck_dis;
+ uint64_t csh_hit;
+ uint64_t csh_miss;
+ uint64_t csh_unh;
+ uint64_t cuc_start;
+ uint64_t cuc_move;
+ /* FLM 0.17 Load */
+ uint64_t load_lps;
+ uint64_t load_aps;
+ uint64_t max_lps;
+ uint64_t max_aps;
};
struct nt4ga_stat_s {
@@ -200,6 +233,9 @@ struct nt4ga_stat_s {
struct host_buffer_counters *mp_stat_structs_hb;
struct port_load_counters *mp_port_load;
+ int flm_stat_ver;
+ struct flm_counters_v1 *mp_stat_structs_flm;
+
/* Rx/Tx totals: */
uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index a6c4fec0be..e59ac5bdb3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -31,6 +31,7 @@ sources = files(
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
'ntnic_filter/ntnic_filter.c',
+ 'ntnic_xstats/ntnic_xstats.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 8a9ca2c03d..5635bd3b42 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1496,6 +1496,113 @@ static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct r
return 0;
}
+static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *stats, unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats = ntnic_xstats_ops->nthw_xstats_get(p_nt4ga_stat, stats, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats =
+ ntnic_xstats_ops->nthw_xstats_get_by_id(p_nt4ga_stat, ids, values, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+
+ struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ ntnic_xstats_ops->nthw_xstats_reset(p_nt4ga_stat, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return dpdk_stats_reset(internals, p_nt_drv, if_index);
+}
+
+static int eth_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names(p_nt4ga_stat, xstats_names, size);
+}
+
+static int eth_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names_by_id(p_nt4ga_stat, xstats_names, ids,
+ size);
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1592,6 +1699,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
+ .xstats_get = eth_xstats_get,
+ .xstats_get_names = eth_xstats_get_names,
+ .xstats_reset = eth_xstats_reset,
+ .xstats_get_by_id = eth_xstats_get_by_id,
+ .xstats_get_names_by_id = eth_xstats_get_names_by_id,
.promiscuous_enable = promiscuous_enable,
.rss_hash_update = eth_dev_rss_hash_update,
.rss_hash_conf_get = rss_hash_conf_get,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 355e2032b1..6737d18a6f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -192,3 +192,18 @@ const struct rte_flow_ops *get_dev_flow_ops(void)
return dev_flow_ops;
}
+
+static struct ntnic_xstats_ops *ntnic_xstats_ops;
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops)
+{
+ ntnic_xstats_ops = ops;
+}
+
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void)
+{
+ if (ntnic_xstats_ops == NULL)
+ ntnic_xstats_ops_init();
+
+ return ntnic_xstats_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8703d478b6..65e7972c68 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -7,6 +7,10 @@
#define __NTNIC_MOD_REG_H__
#include <stdint.h>
+
+#include "rte_ethdev.h"
+#include "rte_flow_driver.h"
+
#include "flow_api.h"
#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
@@ -354,4 +358,28 @@ void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
+struct ntnic_xstats_ops {
+ int (*nthw_xstats_get_names)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size);
+ int (*nthw_xstats_get)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port);
+ void (*nthw_xstats_reset)(nt4ga_stat_t *p_nt4ga_stat, uint8_t port);
+ int (*nthw_xstats_get_names_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size);
+ int (*nthw_xstats_get_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port);
+};
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops);
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void);
+void ntnic_xstats_ops_init(void);
+
#endif /* __NTNIC_MOD_REG_H__ */
diff --git a/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
new file mode 100644
index 0000000000..7604afe6a0
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
@@ -0,0 +1,829 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_ethdev.h>
+
+#include "include/ntdrv_4ga.h"
+#include "ntlog.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "stream_binary_flow_api.h"
+#include "ntnic_mod_reg.h"
+
+struct rte_nthw_xstats_names_s {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint8_t source;
+ unsigned int offset;
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.17
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v1[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) }
+};
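The descriptor tables above all follow one pattern: each entry carries a stat name, a numeric source tag (RX port, TX port, FLM, load), and the byte offset of the field inside that source's counter struct, captured with `offsetof`. A minimal, self-contained sketch of the pattern — all type and function names here are illustrative, not the driver's actual API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Miniature counter structs standing in for port_counters_v2 etc. */
struct mini_counters {
	uint64_t pkts;
	uint64_t octets;
};

/* One descriptor per exported stat: name, source tag, field offset. */
struct mini_xstat_name {
	const char *name;
	uint8_t source;		/* 1 = RX, 2 = TX */
	unsigned int offset;
};

static const struct mini_xstat_name mini_names[] = {
	{ "rx_packets", 1, offsetof(struct mini_counters, pkts) },
	{ "rx_octets",  1, offsetof(struct mini_counters, octets) },
	{ "tx_packets", 2, offsetof(struct mini_counters, pkts) },
};

/* Generic read: select the source struct by tag, then index into it
 * by byte offset -- no per-counter code needed. */
static uint64_t mini_read(const struct mini_counters *rx,
			  const struct mini_counters *tx, size_t idx)
{
	const uint8_t *base =
		(const uint8_t *)(mini_names[idx].source == 1 ? rx : tx);
	uint64_t v;

	memcpy(&v, base + mini_names[idx].offset, sizeof(v));
	return v;
}
```

This is why adding a new xstat in the driver only requires a new table entry, not new read logic.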
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.18
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v2[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * STA 0.9
+ */
+
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v3[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) },
+
+ /* FLM 0.17 */
+ { "flm_count_load_lps", 3, offsetof(struct flm_counters_v1, load_lps) },
+ { "flm_count_load_aps", 3, offsetof(struct flm_counters_v1, load_aps) },
+ { "flm_count_max_lps", 3, offsetof(struct flm_counters_v1, max_lps) },
+ { "flm_count_max_aps", 3, offsetof(struct flm_counters_v1, max_aps) },
+
+ { "rx_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps) },
+ { "rx_max_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps_max) },
+ { "rx_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps) },
+ { "rx_max_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps_max) },
+ { "tx_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps) },
+ { "tx_max_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps_max) },
+ { "tx_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps) },
+ { "tx_max_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps_max) }
+};
+
+#define NTHW_CAP_XSTATS_NAMES_V1 RTE_DIM(nthw_cap_xstats_names_v1)
+#define NTHW_CAP_XSTATS_NAMES_V2 RTE_DIM(nthw_cap_xstats_names_v2)
+#define NTHW_CAP_XSTATS_NAMES_V3 RTE_DIM(nthw_cap_xstats_names_v3)
+
+/*
+ * Container for the reset values
+ */
+#define NTHW_XSTATS_SIZE NTHW_CAP_XSTATS_NAMES_V3
+
+static uint64_t nthw_xstats_reset_val[NUM_ADAPTER_PORTS_MAX][NTHW_XSTATS_SIZE] = { 0 };
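`nthw_xstats_reset_val` implements snapshot-on-reset: the hardware counters are never cleared, so "reset" records the current value and later reads report the delta. A small sketch of that semantics, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Snapshot taken at reset time; reads report hw value minus snapshot. */
static uint64_t demo_reset_val;

static void demo_reset(uint64_t hw_counter)
{
	demo_reset_val = hw_counter;
}

static uint64_t demo_get(uint64_t hw_counter)
{
	return hw_counter - demo_reset_val;
}
```

Sizing the snapshot array with `NTHW_XSTATS_SIZE` (the largest of the three tables) keeps it valid regardless of which table the running FPGA selects.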
+
+/*
+ * These functions must only be called with the stat mutex locked
+ */
+static int nthw_xstats_get(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n && i < nb_names; i++) {
+ stats[i].id = i;
+
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ stats[i].value = *((uint64_t *)&rx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 2:
+ /* TX stat */
+ stats[i].value = *((uint64_t *)&tx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ stats[i].value = *((uint64_t *)&flm_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[0][i];
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ stats[i].value = *((uint64_t *)&pld_ptr[names[i].offset]);
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ default:
+ stats[i].value = 0;
+ break;
+ }
+ }
+
+ return i;
+}
+
+static int nthw_xstats_get_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+ int count = 0;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] < nb_names) {
+ switch (names[ids[i]].source) {
+ case 1:
+ /* RX stat */
+ values[i] = *((uint64_t *)&rx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 2:
+ /* TX stat */
+ values[i] = *((uint64_t *)&tx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ values[i] = *((uint64_t *)&flm_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[0][ids[i]];
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ values[i] = *((uint64_t *)&pld_ptr[names[ids[i]].offset]);
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ default:
+ values[i] = 0;
+ break;
+ }
+
+ count++;
+ }
+ }
+
+ return count;
+}
+
+static void nthw_xstats_reset(nt4ga_stat_t *p_nt4ga_stat, uint8_t port)
+{
+ unsigned int i;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < nb_names; i++) {
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&rx_ptr[names[i].offset]);
+ break;
+
+ case 2:
+ /* TX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&tx_ptr[names[i].offset]);
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ /* Reset makes no sense for flm_count_current */
+ /* Reset can't be used for load_lps, load_aps, max_lps and max_aps */
+ if (flm_ptr &&
+ (strcmp(names[i].name, "flm_count_current") != 0 &&
+ strcmp(names[i].name, "flm_count_load_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_load_aps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_aps") != 0)) {
+ nthw_xstats_reset_val[0][i] =
+ *((uint64_t *)&flm_ptr[names[i].offset]);
+ }
+
+ break;
+
+ case 4:
+ /* Port load stat */
+ /* No reset */
+ break;
+
+ default:
+ break;
+ }
+ }
+}
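The `strcmp` exclusion list in the reset loop above separates gauges from monotonic counters: `flm_count_current` and the load/max values describe the present state, so subtracting a reset baseline would be meaningless. A sketch of that distinction, with an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stats that behave as gauges and therefore take no reset baseline. */
static const char *const demo_gauges[] = {
	"flm_count_current", "flm_count_load_lps", "flm_count_load_aps",
	"flm_count_max_lps", "flm_count_max_aps",
};

static int demo_is_gauge(const char *name)
{
	for (size_t i = 0; i < sizeof(demo_gauges) / sizeof(demo_gauges[0]); i++)
		if (strcmp(name, demo_gauges[i]) == 0)
			return 1;
	return 0;
}
```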
+
+/*
+ * These functions do not require the stat mutex to be locked
+ */
+static int nthw_xstats_get_names(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size && i < nb_names; i++) {
+ strlcpy(xstats_names[i].name, names[i].name, sizeof(xstats_names[i].name));
+ count++;
+ }
+
+ return count;
+}
+
+static int nthw_xstats_get_names_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] < nb_names) {
+ strlcpy(xstats_names[i].name,
+ names[ids[i]].name,
+ RTE_ETH_XSTATS_NAME_SIZE);
+ }
+
+ count++;
+ }
+
+ return count;
+}
+
+static struct ntnic_xstats_ops ops = {
+ .nthw_xstats_get_names = nthw_xstats_get_names,
+ .nthw_xstats_get = nthw_xstats_get,
+ .nthw_xstats_reset = nthw_xstats_reset,
+ .nthw_xstats_get_names_by_id = nthw_xstats_get_names_by_id,
+ .nthw_xstats_get_by_id = nthw_xstats_get_by_id
+};
+
+void ntnic_xstats_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "xstats module was initialized");
+ register_ntnic_xstats_ops(&ops);
+}
--
2.45.0
* [PATCH v4 62/86] net/ntnic: added flow statistics
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (60 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 61/86] net/ntnic: add xstats Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 63/86] net/ntnic: add scrub registers Serhii Iliushyk
` (24 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
xstats was extended with flow statistics support.
Additional counters show learn, unlearn, LPS, APS,
and other FLM events.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 40 ++++
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +-
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 142 ++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.c | 176 ++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 52 ++++++
.../profile_inline/flow_api_profile_inline.c | 46 +++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +++++
drivers/net/ntnic/ntnic_ethdev.c | 132 +++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +
13 files changed, 656 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 3afc5b7853..8fedfdcd04 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -189,6 +189,24 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return -1;
}
+ if (get_flow_filter_ops() != NULL) {
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+ p_nt4ga_stat->flm_stat_ver = ndev->be.flm.ver;
+ p_nt4ga_stat->mp_stat_structs_flm = calloc(1, sizeof(struct flm_counters_v1));
+
+ if (!p_nt4ga_stat->mp_stat_structs_flm) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_APS_MAX, 0);
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_LPS_MAX, 0);
+ }
+
p_nt4ga_stat->mp_port_load =
calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
@@ -236,6 +254,7 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
return -1;
nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
@@ -542,6 +561,27 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
(uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
}
+ /* Update and get FLM stats */
+ flow_filter_ops->flow_get_flm_stats(ndev, (uint64_t *)p_nt4ga_stat->mp_stat_structs_flm,
+ sizeof(struct flm_counters_v1) / sizeof(uint64_t));
+
+ /*
+ * Calculate correct load values:
+ * rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ * bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) - 1ULL);
+ * load_aps = ((uint64_t)load_aps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ * load_lps = ((uint64_t)load_lps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ *
+ * Simplified it gives:
+ *
+ * load_lps = (load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ * load_aps = (load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ */
+
+ p_nt4ga_stat->mp_stat_structs_flm->load_aps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
+ p_nt4ga_stat->mp_stat_structs_flm->load_lps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
return 0;
}
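The simplification in the comment above can be checked numerically: substituting bin+1 = (W * 1e12) / (32 * rpp) into load * 1e12 / ((bin+1) * rpp) cancels rpp and collapses to load * 32 / W. A sketch mirroring the driver's use of `__uint128_t` for the intermediate product (GCC/Clang extension); the rpp and window values below are illustrative, the real ones come from FPGA product parameters:

```c
#include <assert.h>
#include <stdint.h>

/* Full formula from the comment: load * 1e12 / ((bin+1) * rpp),
 * with bin+1 = (window * 1e12) / (32 * rpp). */
static uint64_t load_full(uint64_t load, uint64_t rpp, uint64_t window)
{
	uint64_t bin_plus_1 = (window * 1000000000000ULL) / (32ULL * rpp);

	return ((__uint128_t)load * 1000000000000ULL) /
	       ((__uint128_t)bin_plus_1 * rpp);
}

/* Simplified form used by nt4ga_stat_collect_cap_v1_stats(). */
static uint64_t load_simplified(uint64_t load, uint64_t window)
{
	return (load * 32ULL) / window;
}
```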
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 17d5755634..9cd9d92823 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 38e4d0ca35..677aa7b6c8 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -17,6 +17,7 @@ typedef struct ntdrv_4ga_s {
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
rte_thread_t stat_thread;
+ rte_thread_t port_event_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e59ac5bdb3..c0b7729929 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -59,6 +59,7 @@ sources = files(
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
+ 'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index e953fc1a12..efe9a1a3b9 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1050,11 +1050,14 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
- (void)ndev;
- (void)data;
- (void)size;
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ return -1;
+
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE)
+ return profile_inline_ops->flow_get_flm_stats_profile_inline(ndev, data, size);
- NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
return -1;
}
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f4c29b8bde..1845f74166 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,148 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_stat_update(be->be_dev, &be->flm);
+}
+
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STAT_LRN_DONE:
+ *value = be->flm.v25.lrn_done->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_IGNORE:
+ *value = be->flm.v25.lrn_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_FAIL:
+ *value = be->flm.v25.lrn_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_DONE:
+ *value = be->flm.v25.unl_done->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_IGNORE:
+ *value = be->flm.v25.unl_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_DONE:
+ *value = be->flm.v25.rel_done->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_IGNORE:
+ *value = be->flm.v25.rel_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_DONE:
+ *value = be->flm.v25.prb_done->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_IGNORE:
+ *value = be->flm.v25.prb_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_DONE:
+ *value = be->flm.v25.aul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_IGNORE:
+ *value = be->flm.v25.aul_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_FAIL:
+ *value = be->flm.v25.aul_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_TUL_DONE:
+ *value = be->flm.v25.tul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_FLOWS:
+ *value = be->flm.v25.flows->cnt;
+ break;
+
+ case HW_FLM_LOAD_LPS:
+ *value = be->flm.v25.load_lps->lps;
+ break;
+
+ case HW_FLM_LOAD_APS:
+ *value = be->flm.v25.load_aps->aps;
+ break;
+
+ default: {
+ if (_VER_ < 18)
+ return UNSUP_FIELD;
+
+ switch (field) {
+ case HW_FLM_STAT_STA_DONE:
+ *value = be->flm.v25.sta_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_DONE:
+ *value = be->flm.v25.inf_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_SKIP:
+ *value = be->flm.v25.inf_skip->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_HIT:
+ *value = be->flm.v25.pck_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_MISS:
+ *value = be->flm.v25.pck_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_UNH:
+ *value = be->flm.v25.pck_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_DIS:
+ *value = be->flm.v25.pck_dis->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_HIT:
+ *value = be->flm.v25.csh_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_MISS:
+ *value = be->flm.v25.csh_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_UNH:
+ *value = be->flm.v25.csh_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_START:
+ *value = be->flm.v25.cuc_start->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_MOVE:
+ *value = be->flm.v25.cuc_move->cnt;
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+ }
+ break;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
new file mode 100644
index 0000000000..98b0e8347a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -0,0 +1,176 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+#include <rte_errno.h>
+
+#include "ntlog.h"
+#include "flm_evt_queue.h"
+
+/* Local queues for flm statistic events */
+static struct rte_ring *info_q_local[MAX_INFO_LCL_QUEUES];
+
+/* Remote queues for flm statistic events */
+static struct rte_ring *info_q_remote[MAX_INFO_RMT_QUEUES];
+
+/* Local queues for flm status records */
+static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
+
+/* Remote queues for flm status records */
+static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+
+
+static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
+{
+ static_assert((FLM_EVT_ELEM_SIZE & ~(size_t)3) == FLM_EVT_ELEM_SIZE,
+ "FLM EVENT struct size");
+ static_assert((FLM_STAT_ELEM_SIZE & ~(size_t)3) == FLM_STAT_ELEM_SIZE,
+ "FLM STAT struct size");
+ char name[20] = "NONE";
+ struct rte_ring *q;
+ uint32_t elem_size = 0;
+ uint32_t queue_size = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port >= MAX_INFO_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_INFO_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port >= MAX_INFO_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_INFO_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port >= MAX_STAT_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_STAT_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port >= MAX_STAT_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_STAT_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue create illegal caller: %u", caller);
+ return NULL;
+ }
+
+ q = rte_ring_create_elem(name,
+ elem_size,
+ queue_size,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN, FILTER, "FLM queues cannot be created due to error %02X", rte_errno);
+ return NULL;
+ }
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ info_q_local[port] = q;
+ break;
+
+ case FLM_INFO_REMOTE:
+ info_q_remote[port] = q;
+ break;
+
+ case FLM_STAT_LOCAL:
+ stat_q_local[port] = q;
+ break;
+
+ case FLM_STAT_REMOTE:
+ stat_q_remote[port] = q;
+ break;
+
+ default:
+ break;
+ }
+
+ return q;
+}
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+	/* If the queue has not been created yet, create it; return on failure */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES) {
+ if (info_q_local[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_local[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_LOCAL) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES) {
+ if (info_q_remote[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_remote[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_REMOTE) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ return -ENOENT;
+}
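
The queues above are created on first use: `flm_inf_queue_get()` lazily allocates the ring the first time a reader polls a port, then retries the dequeue (note the `static_assert` checks that the element sizes are multiples of 4, which `rte_ring_create_elem()` requires). A minimal sketch of the same create-on-first-poll pattern, using a plain C array instead of `rte_ring` (all names here are illustrative, not the driver's):

```c
#include <stdlib.h>

#define MAX_PORTS 8
#define Q_SIZE 16	/* power of two, like FLM_EVT_QUEUE_SIZE */

struct evt { unsigned long id; unsigned long packets; };

struct ring {
	struct evt slot[Q_SIZE];
	unsigned head, tail;	/* head = consumer, tail = producer */
};

static struct ring *q_local[MAX_PORTS];

/* Lazily create the per-port queue, mirroring flm_evt_queue_create() */
static struct ring *q_create(unsigned port)
{
	if (port >= MAX_PORTS)
		return NULL;
	q_local[port] = calloc(1, sizeof(struct ring));
	return q_local[port];
}

int q_put(unsigned port, const struct evt *e)
{
	struct ring *q = port < MAX_PORTS ? q_local[port] : NULL;

	if (q == NULL || q->tail - q->head == Q_SIZE)
		return -1;	/* no queue yet, or full */
	q->slot[q->tail++ % Q_SIZE] = *e;
	return 0;
}

/* Like flm_inf_queue_get(): create the queue on first poll, then dequeue */
int q_get(unsigned port, struct evt *out)
{
	struct ring *q = port < MAX_PORTS ? q_local[port] : NULL;

	if (q == NULL) {
		if (q_create(port) == NULL)
			return -1;
		return q_get(port, out);	/* recursive retry, as in the driver */
	}
	if (q->head == q->tail)
		return -1;	/* empty */
	*out = q->slot[q->head++ % Q_SIZE];
	return 0;
}
```

The driver distinguishes failures with `-ENOENT`; the sketch collapses every failure to `-1` for brevity.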
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
new file mode 100644
index 0000000000..238be7a3b2
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_EVT_QUEUE_H_
+#define _FLM_EVT_QUEUE_H_
+
+#include "stdint.h"
+#include "stdbool.h"
+
+struct flm_status_event_s {
+ void *flow;
+ uint32_t learn_ignore : 1;
+ uint32_t learn_failed : 1;
+ uint32_t learn_done : 1;
+};
+
+struct flm_info_event_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum {
+ FLM_INFO_LOCAL,
+ FLM_INFO_REMOTE,
+ FLM_STAT_LOCAL,
+ FLM_STAT_REMOTE,
+};
+
+/* Max number of local queues */
+#define MAX_INFO_LCL_QUEUES 8
+#define MAX_STAT_LCL_QUEUES 8
+
+/* Max number of remote queues */
+#define MAX_INFO_RMT_QUEUES 128
+#define MAX_STAT_RMT_QUEUES 128
+
+/* queue size */
+#define FLM_EVT_QUEUE_SIZE 8192
+#define FLM_STAT_QUEUE_SIZE 8192
+
+/* Event element size */
+#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
+#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+
+#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 73e3c05f56..9ad165bb4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4466,6 +4466,48 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
return 0;
}
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ const enum hw_flm_e fields[] = {
+ HW_FLM_STAT_FLOWS, HW_FLM_STAT_LRN_DONE, HW_FLM_STAT_LRN_IGNORE,
+ HW_FLM_STAT_LRN_FAIL, HW_FLM_STAT_UNL_DONE, HW_FLM_STAT_UNL_IGNORE,
+ HW_FLM_STAT_AUL_DONE, HW_FLM_STAT_AUL_IGNORE, HW_FLM_STAT_AUL_FAIL,
+ HW_FLM_STAT_TUL_DONE, HW_FLM_STAT_REL_DONE, HW_FLM_STAT_REL_IGNORE,
+ HW_FLM_STAT_PRB_DONE, HW_FLM_STAT_PRB_IGNORE,
+
+ HW_FLM_STAT_STA_DONE, HW_FLM_STAT_INF_DONE, HW_FLM_STAT_INF_SKIP,
+ HW_FLM_STAT_PCK_HIT, HW_FLM_STAT_PCK_MISS, HW_FLM_STAT_PCK_UNH,
+ HW_FLM_STAT_PCK_DIS, HW_FLM_STAT_CSH_HIT, HW_FLM_STAT_CSH_MISS,
+ HW_FLM_STAT_CSH_UNH, HW_FLM_STAT_CUC_START, HW_FLM_STAT_CUC_MOVE,
+
+ HW_FLM_LOAD_LPS, HW_FLM_LOAD_APS,
+ };
+
+ const uint64_t fields_cnt = sizeof(fields) / sizeof(enum hw_flm_e);
+
+ if (!ndev->flow_mgnt_prepared)
+ return 0;
+
+ if (size < fields_cnt)
+ return -1;
+
+ hw_mod_flm_stat_update(&ndev->be);
+
+ for (uint64_t i = 0; i < fields_cnt; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_stat_get(&ndev->be, fields[i], &value);
+ data[i] = (fields[i] == HW_FLM_STAT_FLOWS || fields[i] == HW_FLM_LOAD_LPS ||
+ fields[i] == HW_FLM_LOAD_APS)
+ ? value
+ : data[i] + value;
+
+ if (ndev->be.flm.ver < 18 && fields[i] == HW_FLM_STAT_PRB_IGNORE)
+ break;
+ }
+
+ return 0;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4482,6 +4524,10 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * Stats
+ */
+ .flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
/*
* NT Flow FLM Meter API
*/
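
One detail worth noting in `flow_get_flm_stats_profile_inline()` above: event counters are accumulated into `data[i]` across polls, while gauge-like fields (`HW_FLM_STAT_FLOWS`, `HW_FLM_LOAD_LPS`, `HW_FLM_LOAD_APS`) overwrite the previous value. A reduced sketch of that merge rule (hypothetical helper, not driver code):

```c
#include <stdint.h>

/* Counters accumulate across polls; gauges report the latest sample.
 * Mirrors the ternary in flow_get_flm_stats_profile_inline() above. */
enum stat_kind { STAT_COUNTER, STAT_GAUGE };

void merge_stat(uint64_t *data, enum stat_kind kind, uint32_t sample)
{
	*data = (kind == STAT_GAUGE) ? sample : *data + sample;
}
```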
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index c695842077..b44d3a7291 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -52,4 +52,10 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+/*
+ * Stats
+ */
+
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/rte_pmd_ntnic.h b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
new file mode 100644
index 0000000000..4a1ba18a5e
--- /dev/null
+++ b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
@@ -0,0 +1,43 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTNIC_EVENT_H_
+#define NTNIC_EVENT_H_
+
+#include <rte_ethdev.h>
+
+typedef struct ntnic_flm_load_s {
+ uint64_t lookup;
+ uint64_t lookup_maximum;
+ uint64_t access;
+ uint64_t access_maximum;
+} ntnic_flm_load_t;
+
+typedef struct ntnic_port_load_s {
+ uint64_t rx_pps;
+ uint64_t rx_pps_maximum;
+ uint64_t tx_pps;
+ uint64_t tx_pps_maximum;
+ uint64_t rx_bps;
+ uint64_t rx_bps_maximum;
+ uint64_t tx_bps;
+ uint64_t tx_bps_maximum;
+} ntnic_port_load_t;
+
+struct ntnic_flm_statistic_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum rte_ntnic_event_type {
+ RTE_NTNIC_FLM_LOAD_EVENT = RTE_ETH_EVENT_MAX,
+ RTE_NTNIC_PORT_LOAD_EVENT,
+ RTE_NTNIC_FLM_STATS_EVENT,
+};
+
+#endif /* NTNIC_EVENT_H_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 5635bd3b42..4a0dafeff0 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,8 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_evt_queue.h"
+#include "rte_pmd_ntnic.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
@@ -1419,6 +1421,7 @@ drv_deinit(struct drv_s *p_drv)
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
+ THREAD_JOIN(p_nt_drv->port_event_thread);
}
/* stop adapter */
@@ -1709,6 +1712,123 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.rss_hash_conf_get = rss_hash_conf_get,
};
+/*
+ * Port event thread
+ */
+THREAD_FUNC port_event_thread_fn(void *context)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)context;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[internals->port_id];
+ uint8_t port_no = internals->port;
+
+ ntnic_flm_load_t flmdata;
+ ntnic_port_load_t portdata;
+
+ memset(&flmdata, 0, sizeof(flmdata));
+ memset(&portdata, 0, sizeof(portdata));
+
+ while (ndev != NULL && ndev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ /*
+ * FLM load measurement
+ * Only send an event if there has been a change
+ */
+ if (p_nt4ga_stat->flm_stat_ver > 22 && p_nt4ga_stat->mp_stat_structs_flm) {
+ if (flmdata.lookup != p_nt4ga_stat->mp_stat_structs_flm->load_lps ||
+ flmdata.access != p_nt4ga_stat->mp_stat_structs_flm->load_aps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ flmdata.lookup = p_nt4ga_stat->mp_stat_structs_flm->load_lps;
+ flmdata.access = p_nt4ga_stat->mp_stat_structs_flm->load_aps;
+ flmdata.lookup_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps;
+ flmdata.access_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_FLM_LOAD_EVENT,
+ &flmdata);
+ }
+ }
+ }
+
+ /*
+ * Port load measurement
+ * Only send an event if there has been a change.
+ */
+ if (p_nt4ga_stat->mp_port_load) {
+ if (portdata.rx_bps != p_nt4ga_stat->mp_port_load[port_no].rx_bps ||
+ portdata.tx_bps != p_nt4ga_stat->mp_port_load[port_no].tx_bps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ portdata.rx_bps = p_nt4ga_stat->mp_port_load[port_no].rx_bps;
+ portdata.tx_bps = p_nt4ga_stat->mp_port_load[port_no].tx_bps;
+ portdata.rx_pps = p_nt4ga_stat->mp_port_load[port_no].rx_pps;
+ portdata.tx_pps = p_nt4ga_stat->mp_port_load[port_no].tx_pps;
+ portdata.rx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_pps_max;
+ portdata.tx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_pps_max;
+ portdata.rx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_bps_max;
+ portdata.tx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_bps_max;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_PORT_LOAD_EVENT,
+ &portdata);
+ }
+ }
+ }
+
+ /* Process events */
+ {
+ int count = 0;
+ bool do_wait = true;
+
+ while (count < 5000) {
+ /* Local FLM statistic events */
+ struct flm_info_event_s data;
+
+ if (flm_inf_queue_get(port_no, FLM_INFO_LOCAL, &data) == 0) {
+ if (eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ struct ntnic_flm_statistic_s event_data;
+ event_data.bytes = data.bytes;
+ event_data.packets = data.packets;
+ event_data.cause = data.cause;
+ event_data.id = data.id;
+ event_data.timestamp = data.timestamp;
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)
+ RTE_NTNIC_FLM_STATS_EVENT,
+ &event_data);
+ do_wait = false;
+ }
+ }
+
+ if (do_wait)
+ nt_os_wait_usec(10);
+
+ count++;
+ do_wait = true;
+ }
+ }
+ }
+
+ return THREAD_RETURN;
+}
+
/*
* Adapter flm stat thread
*/
@@ -2235,6 +2355,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* increase initialized ethernet devices - PF */
p_drv->n_eth_dev_init_count++;
+
+ /* Port event thread */
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ res = THREAD_CTRL_CREATE(&p_nt_drv->port_event_thread, "nt_port_event_thr",
+ port_event_thread_fn, (void *)internals);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
}
return 0;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 65e7972c68..7325bd1ea8 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -290,6 +290,13 @@ struct profile_inline_ops {
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+ /*
+ * Stats
+ */
+ int (*flow_get_flm_stats_profile_inline)(struct flow_nic_dev *ndev,
+ uint64_t *data,
+ uint64_t size);
+
/*
* NT Flow FLM queue API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
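The port event thread in this patch keeps a snapshot of the last reported FLM and port load and fires a callback only when a new sample differs. That change-detection step can be sketched on its own (illustrative types, not the driver's):

```c
#include <stdbool.h>

struct load { unsigned long lookup, access; };

/* Return true (and update the snapshot) only when the new sample differs,
 * mirroring the checks around rte_eth_dev_callback_process() above. */
bool load_changed(struct load *snap, const struct load *now)
{
	if (snap->lookup == now->lookup && snap->access == now->access)
		return false;
	*snap = *now;	/* remember what was last reported */
	return true;
}
```

In the driver the snapshot update happens under `stat_lck`, since the stat thread updates `mp_stat_structs_flm` concurrently; the sketch omits the locking.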
* [PATCH v4 63/86] net/ntnic: add scrub registers
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (61 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 62/86] net/ntnic: added flow statistics Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 64/86] net/ntnic: update documentation Serhii Iliushyk
` (23 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add scrub fields to the FPGA map file.
Remove a duplicated macro.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 17 ++++++++++++++++-
drivers/net/ntnic/ntnic_ethdev.c | 3 ---
2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 620968ceb6..f1033ca949 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -728,7 +728,7 @@ static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
{ FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
{ FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
{ FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
- { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 }, { FLM_LRN_DATA_SCRUB_PROF, 4, 712, 0x0000 },
{ FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
{ FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
{ FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
@@ -782,6 +782,18 @@ static nthw_fpga_field_init_s flm_scan_fields[] = {
{ FLM_SCAN_I, 16, 0, 0 },
};
+static nthw_fpga_field_init_s flm_scrub_ctrl_fields[] = {
+ { FLM_SCRUB_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_SCRUB_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scrub_data_fields[] = {
+ { FLM_SCRUB_DATA_DEL, 1, 12, 0 },
+ { FLM_SCRUB_DATA_INF, 1, 13, 0 },
+ { FLM_SCRUB_DATA_R, 4, 8, 0 },
+ { FLM_SCRUB_DATA_T, 8, 0, 0 },
+};
+
static nthw_fpga_field_init_s flm_status_fields[] = {
{ FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
{ FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
@@ -921,6 +933,8 @@ static nthw_fpga_register_init_s flm_registers[] = {
{ FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
{ FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
{ FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_SCRUB_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_scrub_ctrl_fields },
+ { FLM_SCRUB_DATA, 11, 14, NTHW_FPGA_REG_TYPE_WO, 0, 4, flm_scrub_data_fields },
{ FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
{ FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
{ FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
@@ -3058,6 +3072,7 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
+ { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 4a0dafeff0..a212b3ab07 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -47,9 +47,6 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
-/* Max RSS queues */
-#define MAX_QUEUES 125
-
#define NUM_VQ_SEGS(_data_size_) \
({ \
size_t _size = (_data_size_); \
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
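Each entry in the field tables above follows the `nthw_fpga_field_init_s` shape `{ id, bit width, bit offset, reset }`; e.g. `FLM_SCRUB_DATA_T` is 8 bits at offset 0 and `FLM_SCRUB_DATA_R` is 4 bits at offset 8. Assuming that layout, packing and extracting such fields from a register word looks like this (hypothetical helpers, not the driver's accessors):

```c
#include <stdint.h>

/* Field descriptor shaped like nthw_fpga_field_init_s: width and bit offset */
struct field { unsigned width, offset; };

/* FLM_SCRUB_DATA layout taken from the table above (assumed semantics) */
static const struct field SCRUB_T   = { 8, 0 };
static const struct field SCRUB_R   = { 4, 8 };
static const struct field SCRUB_DEL = { 1, 12 };

uint32_t field_set(uint32_t reg, struct field f, uint32_t v)
{
	uint32_t mask = ((1u << f.width) - 1u) << f.offset;

	return (reg & ~mask) | ((v << f.offset) & mask);
}

uint32_t field_get(uint32_t reg, struct field f)
{
	return (reg >> f.offset) & ((1u << f.width) - 1u);
}
```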
* [PATCH v4 64/86] net/ntnic: update documentation
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (62 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 63/86] net/ntnic: add scrub registers Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-30 1:55 ` Ferruh Yigit
2024-10-29 16:42 ` [PATCH v4 65/86] net/ntnic: add flow aging API Serhii Iliushyk
` (22 subsequent siblings)
86 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Update required documentation
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 30 ++++++++++++++++++++++++++
doc/guides/rel_notes/release_24_11.rst | 2 ++
2 files changed, 32 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 2c160ae592..e7e1cbcff7 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -40,6 +40,36 @@ Features
- Unicast MAC filter
- Multicast MAC filter
- Promiscuous mode (Enable only. The device always runs in promiscuous mode)
+- Multiple TX and RX queues.
+- Scatter and gather for TX and RX.
+- RSS hash
+- RSS key update
+- RSS based on VLAN or 5-tuple.
+- RSS using different combinations of fields: L3 only, L4 only or both, and
+ source only, destination only or both.
+- Several RSS hash keys, one for each flow type.
+- Default RSS operation with no hash key specification.
+- VLAN filtering.
+- RX VLAN stripping via raw decap.
+- TX VLAN insertion via raw encap.
+- Flow API.
+- Multiple processes.
+- Tunnel types: GTP.
+- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
+ verification.
+- Support for multiple rte_flow groups.
+- Encapsulation and decapsulation of GTP data.
+- Packet modification: NAT, TTL decrement, DSCP tagging.
+- Traffic mirroring.
+- Jumbo frame support.
+- Port and queue statistics.
+- RMON statistics in extended stats.
+- Flow metering, including meter policy API.
+- Link state information.
+- CAM and TCAM based matching.
+- Exact match of 140 million flows and policies.
+- Basic stats
+- Extended stats
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index fa4822d928..75769d1992 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -160,6 +160,8 @@ New Features
* Added NT flow backend initialization.
* Added initialization of FPGA modules related to flow HW offload.
* Added basic handling of the virtual queues.
+ * Added flow handling API.
+ * Added statistics API.
* **Added cryptodev queue pair reset support.**
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 65/86] net/ntnic: add flow aging API
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (63 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 64/86] net/ntnic: update documentation Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 66/86] net/ntnic: add aging API to the inline profile Serhii Iliushyk
` (21 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the flow aging API to the ops structure.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 71 +++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 88 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 21 +++++
3 files changed, 180 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index efe9a1a3b9..b101a9462e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1048,6 +1048,70 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+static int flow_get_aged_flows(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline_ops uninitialized");
+ return -1;
+ }
+
+ if (nb_contexts > 0 && !context) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "rte_flow_get_aged_flows - empty context";
+ return -1;
+ }
+
+ return profile_inline_ops->flow_get_aged_flows_profile_inline(dev, caller_id, context,
+ nb_contexts, error);
+}
+
+static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_info;
+ (void)queue_info;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_attr;
+ (void)queue_attr;
+ (void)nb_queue;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
@@ -1076,6 +1140,13 @@ static const struct flow_filter_ops ops = {
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
.flow_get_flm_stats = flow_get_flm_stats,
+ .flow_get_aged_flows = flow_get_aged_flows,
+
+ /*
+ * NT Flow asynchronous operations API
+ */
+ .flow_info_get = flow_info_get,
+ .flow_configure = flow_configure,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index ef69064f98..6d65ffd38f 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -731,6 +731,91 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int eth_flow_get_aged_flows(struct rte_eth_dev *eth_dev,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_get_aged_flows(internals->flw_dev, caller_id, context,
+ nb_contexts, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+/*
+ * NT Flow asynchronous operations API
+ */
+
+static int eth_flow_info_get(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_info_get(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (struct rte_flow_port_info *)port_info,
+ (struct rte_flow_queue_info *)queue_info,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr,
+ uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_configure(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (const struct rte_flow_port_attr *)port_attr,
+ nb_queue,
+ (const struct rte_flow_queue_attr **)queue_attr,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static int poll_statistics(struct pmd_internals *internals)
{
int flow;
@@ -857,6 +942,9 @@ static const struct rte_flow_ops dev_flow_ops = {
.destroy = eth_flow_destroy,
.flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
+ .get_aged_flows = eth_flow_get_aged_flows,
+ .info_get = eth_flow_info_get,
+ .configure = eth_flow_configure,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 7325bd1ea8..52f197e873 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -286,6 +286,12 @@ struct profile_inline_ops {
FILE *file,
struct rte_flow_error *error);
+ int (*flow_get_aged_flows_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -355,6 +361,21 @@ struct flow_filter_ops {
int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
+
+ int (*flow_get_aged_flows)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
+ int (*flow_info_get)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v4 66/86] net/ntnic: add aging API to the inline profile
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (64 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 65/86] net/ntnic: add flow aging API Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 67/86] net/ntnic: add flow info and flow configure APIs Serhii Iliushyk
` (20 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Added an implementation of the flow aging retrieval API.
The module that operates on the age queue was extended with
get, count, and size operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../flow_api/profile_inline/flm_age_queue.c | 49 ++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 24 +++++++++
.../profile_inline/flow_api_profile_inline.c | 51 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 6 +++
5 files changed, 131 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index c0b7729929..8c6d02a5ec 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -58,6 +58,7 @@ sources = files(
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_age_queue.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
new file mode 100644
index 0000000000..f6f04009fe
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -0,0 +1,49 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <rte_ring.h>
+
+#include "ntlog.h"
+#include "flm_age_queue.h"
+
+/* Queues for flm aged events */
+static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+ /* If the queue has not been created, ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue empty");
+
+ return ret;
+ }
+
+ return -ENOENT;
+}
+
+unsigned int flm_age_queue_count(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_count(age_queue[caller_id]);
+
+ return ret;
+}
+
+unsigned int flm_age_queue_get_size(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_get_size(age_queue[caller_id]);
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
new file mode 100644
index 0000000000..d61609cc01
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -0,0 +1,24 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_AGE_QUEUE_H_
+#define _FLM_AGE_QUEUE_H_
+
+#include <stdint.h>
+
+struct flm_age_event_s {
+ void *context;
+};
+
+/* Max number of event queues */
+#define MAX_EVT_AGE_QUEUES 256
+
+#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
+unsigned int flm_age_queue_count(uint16_t caller_id);
+unsigned int flm_age_queue_get_size(uint16_t caller_id);
+
+#endif /* _FLM_AGE_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9ad165bb4e..cdec414144 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -7,6 +7,7 @@
#include "nt_util.h"
#include "hw_mod_backend.h"
+#include "flm_age_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -4394,6 +4395,55 @@ static void dump_flm_data(const uint32_t *data, FILE *file)
}
}
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ unsigned int queue_size = flm_age_queue_get_size(caller_id);
+
+ if (queue_size == 0) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue size is not configured";
+ return -1;
+ }
+
+ unsigned int queue_count = flm_age_queue_count(caller_id);
+
+ if (context == NULL)
+ return queue_count;
+
+ if (queue_count < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue contains fewer records than the expected output";
+ return -1;
+ }
+
+ if (queue_size < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Defined aged queue size is smaller than the expected output";
+ return -1;
+ }
+
+ uint32_t idx;
+
+ for (idx = 0; idx < nb_contexts; ++idx) {
+ struct flm_age_event_s obj;
+ int ret = flm_age_queue_get(caller_id, &obj);
+
+ if (ret != 0)
+ break;
+
+ context[idx] = obj.context;
+ }
+
+ return idx;
+}
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -4524,6 +4574,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ .flow_get_aged_flows_profile_inline = flow_get_aged_flows_profile_inline,
/*
* Stats
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b44d3a7291..e1934bc6a6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -48,6 +48,12 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
FILE *file,
struct rte_flow_error *error);
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
--
2.45.0
* [PATCH v4 67/86] net/ntnic: add flow info and flow configure APIs
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (65 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 66/86] net/ntnic: add aging API to the inline profile Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 68/86] net/ntnic: add flow aging event Serhii Iliushyk
` (19 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The inline profile was extended with the flow info and flow
configure APIs.
The module that operates on the age queue was extended with
create and free operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 19 +----
.../flow_api/profile_inline/flm_age_queue.c | 77 +++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 5 ++
.../profile_inline/flow_api_profile_inline.c | 62 ++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 9 +++
drivers/net/ntnic/ntnic_mod_reg.h | 9 +++
8 files changed, 169 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index ed96f77bc0..89f071d982 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -77,6 +77,9 @@ struct flow_eth_dev {
/* QSL_HSH index if RSS needed QSL v6+ */
int rss_target_id;
+ /* The size of buffer for aged out flow list */
+ uint32_t nb_aging_objects;
+
struct flow_eth_dev *next;
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 155a9e1fd6..604a896717 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -320,6 +320,7 @@ struct flow_handle {
uint32_t flm_teid;
uint8_t flm_rqi;
uint8_t flm_qfi;
+ uint8_t flm_scrub_prof;
};
};
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index b101a9462e..5349dc84ab 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1075,12 +1075,6 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_info;
- (void)queue_info;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1088,20 +1082,14 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_info_get_profile_inline(dev, caller_id, port_info,
+ queue_info, error);
}
static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_attr;
- (void)queue_attr;
- (void)nb_queue;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1109,7 +1097,8 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_configure_profile_inline(dev, caller_id, port_attr,
+ nb_queue, queue_attr, error);
}
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index f6f04009fe..fbc947ee1d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -4,12 +4,89 @@
*/
#include <rte_ring.h>
+#include <rte_errno.h>
#include "ntlog.h"
#include "flm_age_queue.h"
/* Queues for flm aged events */
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+
+void flm_age_queue_free(uint8_t port, uint16_t caller_id)
+{
+ struct rte_ring *q = NULL;
+
+ if (port < MAX_EVT_AGE_PORTS)
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ q = age_queue[caller_id];
+ age_queue[caller_id] = NULL;
+ }
+
+ if (q != NULL)
+ rte_ring_free(q);
+}
+
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
+{
+ char name[20];
+ struct rte_ring *q = NULL;
+
+ if (rte_is_power_of_2(count) == false || count > RTE_RING_SZ_MASK) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue number of elements (%u) is invalid, must be power of 2, and not exceed %u",
+ count,
+ RTE_RING_SZ_MASK);
+ return NULL;
+ }
+
+ if (port >= MAX_EVT_AGE_PORTS) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_EVT_AGE_PORTS - 1);
+ return NULL;
+ }
+
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+
+ if (caller_id >= MAX_EVT_AGE_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for caller_id %u. Max supported caller_id is %u",
+ caller_id,
+ MAX_EVT_AGE_QUEUES - 1);
+ return NULL;
+ }
+
+ if (age_queue[caller_id] != NULL) {
+ NT_LOG(DBG, FILTER, "FLM aged event queue %u already created", caller_id);
+ return age_queue[caller_id];
+ }
+
+ snprintf(name, 20, "AGE_EVENT%u", caller_id);
+ q = rte_ring_create_elem(name,
+ FLM_AGE_ELEM_SIZE,
+ count,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created due to error %02X",
+ rte_errno);
+ return NULL;
+ }
+
+ age_queue[caller_id] = q;
+
+ return q;
+}
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index d61609cc01..9ff6ef6de0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -15,8 +15,13 @@ struct flm_age_event_s {
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
+/* Max number of event ports */
+#define MAX_EVT_AGE_PORTS 128
+
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index cdec414144..1824c931fe 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -490,7 +490,7 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->ft = fh->flm_ft;
learn_record->kid = fh->flm_kid;
learn_record->eor = 1;
- learn_record->scrub_prof = 0;
+ learn_record->scrub_prof = fh->flm_scrub_prof;
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
return 0;
@@ -2438,6 +2438,7 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_rpl_ext_ptr = rpl_ext_ptr;
fh->flm_prio = (uint8_t)priority;
fh->flm_ft = (uint8_t)flm_ft;
+ fh->flm_scrub_prof = (uint8_t)flm_scrub;
for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
switch (fd->modify_field[i].select) {
@@ -4558,6 +4559,63 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
return 0;
}
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ (void)queue_info;
+ (void)caller_id;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+ memset(port_info, 0, sizeof(struct rte_flow_port_info));
+
+ port_info->max_nb_aging_objects = dev->nb_aging_objects;
+
+ return res;
+}
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ (void)nb_queue;
+ (void)queue_attr;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (port_attr->nb_aging_objects > 0) {
+ if (dev->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ struct rte_ring *age_queue =
+ flm_age_queue_create(dev->port_id, caller_id, port_attr->nb_aging_objects);
+
+ if (age_queue == NULL) {
+ error->message = "Failed to allocate aging objects";
+ goto error_out;
+ }
+
+ dev->nb_aging_objects = port_attr->nb_aging_objects;
+ }
+
+ return res;
+
+error_out:
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+
+ if (port_attr->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ return -1;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4579,6 +4637,8 @@ static const struct profile_inline_ops ops = {
* Stats
*/
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
+ .flow_info_get_profile_inline = flow_info_get_profile_inline,
+ .flow_configure_profile_inline = flow_configure_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e1934bc6a6..ea1d9c31b2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -64,4 +64,13 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error);
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 52f197e873..15da911ca7 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -309,6 +309,15 @@ struct profile_inline_ops {
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
uint32_t (*flm_update)(struct flow_eth_dev *dev);
+
+ int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v4 68/86] net/ntnic: add flow aging event
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (66 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 67/86] net/ntnic: add flow info and flow configure APIs Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 69/86] net/ntnic: add termination thread Serhii Iliushyk
` (18 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The port thread was extended with a new age event callback handler.
Getters and setters for the LRN, INF, and STA registers were added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 13 ++
drivers/net/ntnic/include/hw_mod_backend.h | 11 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 16 ++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 158 +++++++++++++++
.../flow_api/profile_inline/flm_age_queue.c | 28 +++
.../flow_api/profile_inline/flm_age_queue.h | 12 ++
.../flow_api/profile_inline/flm_evt_queue.c | 20 ++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_hw_db_inline.c | 142 +++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 84 ++++----
.../profile_inline/flow_api_profile_inline.c | 183 ++++++++++++++++++
.../flow_api_profile_inline_config.h | 21 +-
drivers/net/ntnic/ntnic_ethdev.c | 16 ++
14 files changed, 671 insertions(+), 37 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 604a896717..c75e7cff83 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -148,6 +148,14 @@ struct hsh_def_s {
const uint8_t *key; /* Hash key. */
};
+/*
+ * AGE configuration, see struct rte_flow_action_age
+ */
+struct age_def_s {
+ uint32_t timeout;
+ void *context;
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -264,6 +272,11 @@ struct nic_flow_def {
* Hash module RSS definitions
*/
struct hsh_def_s hsh;
+
+ /*
+ * AGE action timeout
+ */
+ struct age_def_s age;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 9cd9d92823..7a36e4c6d6 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be);
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
@@ -695,8 +698,16 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
uint32_t *sta_word_cnt);
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt);
+uint32_t hw_mod_flm_scrub_timeout_decode(uint32_t t_enc);
+uint32_t hw_mod_flm_scrub_timeout_encode(uint32_t t);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_scrub_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
struct hsh_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 5635ac4524..a3f5e1d7f7 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -129,3 +129,19 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
pthread_mutex_unlock(&handle->mtx);
}
+
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, id);
+
+ *caller_id = element->caller_id;
+ *type = element->type;
+ memcpy(flm_h, &element->handle, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index e190fe4a11..edb4f42729 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -20,4 +20,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
uint8_t type);
void ntnic_id_table_free_id(void *id_table, uint32_t id);
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 1845f74166..14dd95a150 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,52 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_buf_ctrl_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_buf_ctrl_mod_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value)
+{
+ int get = 1; /* Only get supported */
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_BUF_CTRL_LRN_FREE:
+ GET_SET(be->flm.v25.buf_ctrl->lrn_free, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_INF_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->inf_avail, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_STA_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->sta_avail, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_buf_ctrl_mod_get(be, field, value);
+}
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
{
return be->iface->flm_stat_update(be->be_dev, &be->flm);
@@ -887,3 +933,115 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
return ret;
}
+
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_INF_STA_DATA:
+ be->iface->flm_inf_sta_data_update(be->be_dev, &be->flm, inf_value,
+ inf_size, inf_word_cnt, sta_value,
+ sta_size, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+/*
+ * SCRUB timeout support functions to encode users' input into FPGA 8-bit time format:
+ * Timeout in seconds (2^30 nanoseconds); zero means disabled. Value is:
+ *
+ * (T[7:3] != 0) ? ((8 + T[2:0]) shift-left (T[7:3] - 1)) : T[2:0]
+ *
+ * The maximum allowed value is 0xEF (127 years).
+ *
+ * Note that this represents a lower bound on the timeout, depending on the flow
+ * scanner interval and overall load, the timeout can be substantially longer.
+ */
+uint32_t hw_mod_flm_scrub_timeout_decode(uint32_t t_enc)
+{
+ uint8_t t_bits_2_0 = t_enc & 0x07;
+ uint8_t t_bits_7_3 = (t_enc >> 3) & 0x1F;
+ return t_bits_7_3 != 0 ? ((8 + t_bits_2_0) << (t_bits_7_3 - 1)) : t_bits_2_0;
+}
+
+uint32_t hw_mod_flm_scrub_timeout_encode(uint32_t t)
+{
+ uint32_t t_enc = 0;
+
+ if (t > 0) {
+ uint32_t t_dec = 0;
+
+ do {
+ t_enc++;
+ t_dec = hw_mod_flm_scrub_timeout_decode(t_enc);
+ } while (t_enc <= 0xEF && t_dec < t);
+ }
+
+ return t_enc;
+}
+
+static int hw_mod_flm_scrub_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCRUB_PRESET_ALL:
+ if (get)
+ return UNSUP_FIELD;
+
+ memset(&be->flm.v25.scrub[index], (uint8_t)*value,
+ sizeof(struct flm_v25_scrub_s));
+ break;
+
+ case HW_FLM_SCRUB_T:
+ GET_SET(be->flm.v25.scrub[index].t, value);
+ break;
+
+ case HW_FLM_SCRUB_R:
+ GET_SET(be->flm.v25.scrub[index].r, value);
+ break;
+
+ case HW_FLM_SCRUB_DEL:
+ GET_SET(be->flm.v25.scrub[index].del, value);
+ break;
+
+ case HW_FLM_SCRUB_INF:
+ GET_SET(be->flm.v25.scrub[index].inf, value);
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scrub_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_scrub_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index fbc947ee1d..76bbd57f65 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -13,6 +13,21 @@
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+__rte_always_inline int flm_age_event_get(uint8_t port)
+{
+ return rte_atomic_load_explicit(&age_event[port], rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_set(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 1, rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_clear(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+}
+
void flm_age_queue_free(uint8_t port, uint16_t caller_id)
{
struct rte_ring *q = NULL;
@@ -88,6 +103,19 @@ struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned
return q;
}
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+ /* If the queue is not created, then ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue full");
+ }
+}
+
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 9ff6ef6de0..27154836c5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -12,6 +12,14 @@ struct flm_age_event_s {
void *context;
};
+/* Indicates why the flow info record was generated */
+#define INF_DATA_CAUSE_SW_UNLEARN 0
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED 1
+#define INF_DATA_CAUSE_NA 2
+#define INF_DATA_CAUSE_PERIODIC_FLOW_INFO 3
+#define INF_DATA_CAUSE_SW_PROBE 4
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT 5
+
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
@@ -20,8 +28,12 @@ struct flm_age_event_s {
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+int flm_age_event_get(uint8_t port);
+void flm_age_event_set(uint8_t port);
+void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 98b0e8347a..db9687714f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -138,6 +138,26 @@ static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
return q;
}
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
+{
+ struct rte_ring **stat_q = remote ? stat_q_remote : stat_q_local;
+
+ if (port >= (remote ? MAX_STAT_RMT_QUEUES : MAX_STAT_LCL_QUEUES))
+ return -1;
+
+ if (stat_q[port] == NULL) {
+ if (flm_evt_queue_create(port, remote ? FLM_STAT_REMOTE : FLM_STAT_LOCAL) == NULL)
+ return -1;
+ }
+
+ if (rte_ring_sp_enqueue_elem(stat_q[port], obj, FLM_STAT_ELEM_SIZE) != 0) {
+ NT_LOG(DBG, FILTER, "FLM status queue full");
+ return -1;
+ }
+
+ return 0;
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 238be7a3b2..3a61f844b6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,5 +48,6 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index b5fee67e67..2fee6ae6b5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "rte_common.h"
#define HW_DB_INLINE_ACTION_SET_NB 512
@@ -57,12 +58,18 @@ struct hw_db_inline_resource_db {
int ref;
} *hsh;
+ struct hw_db_inline_resource_db_scrub {
+ struct hw_db_inline_scrub_data data;
+ int ref;
+ } *scrub;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
uint32_t nb_tpe;
uint32_t nb_tpe_ext;
uint32_t nb_hsh;
+ uint32_t nb_scrub;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -255,6 +262,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_scrub = ndev->be.flm.nb_scrub_profiles;
+ db->scrub = calloc(db->nb_scrub, sizeof(struct hw_db_inline_resource_db_scrub));
+
+ if (db->scrub == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
/* Preset data */
@@ -276,6 +291,7 @@ void hw_db_inline_destroy(void *db_handle)
free(db->tpe);
free(db->tpe_ext);
free(db->hsh);
+ free(db->scrub);
free(db->cat);
@@ -366,6 +382,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_SCRUB:
+ hw_db_inline_scrub_deref(ndev, db_handle,
+ *(struct hw_db_flm_scrub_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -410,9 +431,9 @@ void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct
else
fprintf(file,
- " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d, SCRUB id %d\n",
data->cot.ids, data->qsl.ids, data->slc_lr.ids,
- data->tpe.ids, data->hsh.ids);
+ data->tpe.ids, data->hsh.ids, data->scrub.ids);
break;
}
@@ -577,6 +598,15 @@ void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct
break;
}
+ case HW_DB_IDX_TYPE_FLM_SCRUB: {
+ const struct hw_db_inline_scrub_data *data = &db->scrub[idxs[i].ids].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " SCRUB %d\n", idxs[i].ids);
+ fprintf(file, " Timeout: %d, encoded timeout: %d\n",
+ hw_mod_flm_scrub_timeout_decode(data->timeout), data->timeout);
+ break;
+ }
+
case HW_DB_IDX_TYPE_HSH: {
const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
fprintf(file, " HSH %d\n", idxs[i].ids);
@@ -690,6 +720,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_HSH:
return &db->hsh[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_SCRUB:
+ return &db->scrub[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -1540,7 +1573,7 @@ static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_
return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
- data1->hsh.raw == data2->hsh.raw;
+ data1->hsh.raw == data2->hsh.raw && data1->scrub.raw == data2->scrub.raw;
}
struct hw_db_action_set_idx
@@ -2849,3 +2882,106 @@ void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->hsh[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM SCRUB */
+/******************************************************************************/
+
+static int hw_db_inline_scrub_compare(const struct hw_db_inline_scrub_data *data1,
+ const struct hw_db_inline_scrub_data *data2)
+{
+ return data1->timeout == data2->timeout;
+}
+
+struct hw_db_flm_scrub_idx hw_db_inline_scrub_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_scrub_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_scrub_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_SCRUB;
+
+ /* NOTE: scrub id 0 is reserved for "default" timeout 0, i.e. flow will never AGE-out */
+ if (data->timeout == 0) {
+ idx.ids = 0;
+ hw_db_inline_scrub_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_scrub; ++i) {
+ int ref = db->scrub[i].ref;
+
+ if (ref > 0 && hw_db_inline_scrub_compare(data, &db->scrub[i].data)) {
+ idx.ids = i;
+ hw_db_inline_scrub_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ int res = hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_T, idx.ids, data->timeout);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_R, idx.ids,
+ NTNIC_SCANNER_TIMEOUT_RESOLUTION);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_DEL, idx.ids, SCRUB_DEL);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_INF, idx.ids, SCRUB_INF);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->scrub[idx.ids].ref = 1;
+ memcpy(&db->scrub[idx.ids].data, data, sizeof(struct hw_db_inline_scrub_data));
+ flow_nic_mark_resource_used(ndev, RES_SCRUB_RCP, idx.ids);
+
+ hw_mod_flm_scrub_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_scrub_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx)
+{
+ (void)ndev;
+
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->scrub[idx.ids].ref += 1;
+}
+
+void hw_db_inline_scrub_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->scrub[idx.ids].ref -= 1;
+
+ if (db->scrub[idx.ids].ref <= 0) {
+ /* NOTE: scrub id 0 is reserved for "default" timeout 0, which shall not be removed
+ */
+ if (idx.ids > 0) {
+ hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_T, idx.ids, 0);
+ hw_mod_flm_scrub_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->scrub[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_scrub_data));
+ flow_nic_free_resource(ndev, RES_SCRUB_RCP, idx.ids);
+ }
+
+ db->scrub[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a9d31c86ea..c920d36cfd 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -117,6 +117,10 @@ struct hw_db_flm_ft {
HW_DB_IDX;
};
+struct hw_db_flm_scrub_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -145,6 +149,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
+ HW_DB_IDX_TYPE_FLM_SCRUB,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -160,6 +165,43 @@ struct hw_db_inline_match_set_data {
uint8_t priority;
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
+ struct hw_db_tpe_idx tpe;
+ struct hw_db_hsh_idx hsh;
+ struct hw_db_flm_scrub_idx scrub;
+ };
+ };
+};
+
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+
+ struct hw_db_action_set_idx action_set;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -232,39 +274,8 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
-struct hw_db_inline_action_set_data {
- int contains_jump;
- union {
- int jump;
- struct {
- struct hw_db_cot_idx cot;
- struct hw_db_qsl_idx qsl;
- struct hw_db_slc_lr_idx slc_lr;
- struct hw_db_tpe_idx tpe;
- struct hw_db_hsh_idx hsh;
- };
- };
-};
-
-struct hw_db_inline_km_rcp_data {
- uint32_t rcp;
-};
-
-struct hw_db_inline_km_ft_data {
- struct hw_db_cat_idx cat;
- struct hw_db_km_idx km;
- struct hw_db_action_set_idx action_set;
-};
-
-struct hw_db_inline_flm_ft_data {
- /* Group zero flows should set jump. */
- /* Group nonzero flows should set group. */
- int is_group_zero;
- union {
- int jump;
- int group;
- };
- struct hw_db_action_set_idx action_set;
+struct hw_db_inline_scrub_data {
+ uint32_t timeout;
};
/**/
@@ -368,6 +379,13 @@ void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct
void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_flm_ft idx);
+struct hw_db_flm_scrub_idx hw_db_inline_scrub_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_scrub_data *data);
+void hw_db_inline_scrub_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx);
+void hw_db_inline_scrub_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1824c931fe..9e13058718 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,7 @@
#include "hw_mod_backend.h"
#include "flm_age_queue.h"
+#include "flm_evt_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -20,6 +21,13 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define DMA_BLOCK_SIZE 256
+#define DMA_OVERHEAD 20
+#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
+#define MAX_STA_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_STA_DATA)
+#define WORDS_PER_INF_DATA (sizeof(struct flm_v25_inf_data_s) / sizeof(uint32_t))
+#define MAX_INF_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_INF_DATA)
+
#define NT_FLM_MISS_FLOW_TYPE 0
#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
@@ -71,14 +79,127 @@ static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
return r.num;
}
+static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
+{
+ if (caller_id < MAX_VDPA_PORTS + 1) {
+ *port = caller_id;
+ return true;
+ }
+
+ *port = caller_id - MAX_VDPA_PORTS - 1;
+ return false;
+}
+
+static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_inf_data_s *inf_data =
+ (struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, inf_data->id, &flm_h, &caller_id,
+ &type);
+
+ /* Check that the received record holds valid meter statistics */
+ if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
+
+ age_event.context = fh->context;
+
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
+ break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
+ }
+ }
+ }
+}
+
+static void flm_mtr_read_sta_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_sta_data_s *sta_data =
+ (struct flm_v25_sta_data_s *)&data[i * WORDS_PER_STA_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, sta_data->id, &flm_h, &caller_id,
+ &type);
+
+ if (type == 1) {
+ uint8_t port;
+ bool remote_caller = is_remote_caller(caller_id, &port);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+ ((struct flow_handle *)flm_h.p)->learn_ignored = 1;
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ struct flm_status_event_s sta_event = {
+ .flow = flm_h.p,
+ .learn_ignore = sta_data->lis,
+ .learn_failed = sta_data->lfs,
+ };
+
+ flm_sta_queue_put(port, remote_caller, &sta_event);
+ }
+ }
+}
+
static uint32_t flm_update(struct flow_eth_dev *dev)
{
static uint32_t inf_word_cnt;
static uint32_t sta_word_cnt;
+ uint32_t inf_data[DMA_BLOCK_SIZE];
+ uint32_t sta_data[DMA_BLOCK_SIZE];
+
+ if (inf_word_cnt >= WORDS_PER_INF_DATA || sta_word_cnt >= WORDS_PER_STA_DATA) {
+ uint32_t inf_records = inf_word_cnt / WORDS_PER_INF_DATA;
+
+ if (inf_records > MAX_INF_DATA_RECORDS_PER_READ)
+ inf_records = MAX_INF_DATA_RECORDS_PER_READ;
+
+ uint32_t sta_records = sta_word_cnt / WORDS_PER_STA_DATA;
+
+ if (sta_records > MAX_STA_DATA_RECORDS_PER_READ)
+ sta_records = MAX_STA_DATA_RECORDS_PER_READ;
+
+ hw_mod_flm_inf_sta_data_update_get(&dev->ndev->be, HW_FLM_FLOW_INF_STA_DATA,
+ inf_data, inf_records * WORDS_PER_INF_DATA,
+ &inf_word_cnt, sta_data,
+ sta_records * WORDS_PER_STA_DATA,
+ &sta_word_cnt);
+
+ if (inf_records > 0)
+ flm_mtr_read_inf_records(dev, inf_data, inf_records);
+
+ if (sta_records > 0)
+ flm_mtr_read_sta_records(dev, sta_data, sta_records);
+
+ return 1;
+ }
+
if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
return 1;
+ hw_mod_flm_buf_ctrl_update(&dev->ndev->be);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_INF_AVAIL, &inf_word_cnt);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_STA_AVAIL, &sta_word_cnt);
+
return inf_word_cnt + sta_word_cnt;
}
@@ -1067,6 +1188,25 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_AGE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_AGE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_age age_tmp;
+ const struct rte_flow_action_age *age =
+ memcpy_mask_if(&age_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_age));
+ fd->age.timeout = hw_mod_flm_scrub_timeout_encode(age->timeout);
+ fd->age.context = age->context;
+ NT_LOG(DBG, FILTER,
+ "normalized timeout: %u, original timeout: %u, context: %p",
+ hw_mod_flm_scrub_timeout_decode(fd->age.timeout),
+ age->timeout, fd->age.context);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
@@ -2466,6 +2606,7 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
break;
}
}
+ fh->context = fd->age.context;
}
static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
@@ -2722,6 +2863,21 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup SCRUB profile */
+ struct hw_db_inline_scrub_data scrub_data = { .timeout = fd->age.timeout };
+ struct hw_db_flm_scrub_idx scrub_idx =
+ hw_db_inline_scrub_add(dev->ndev, dev->ndev->hw_db_handle, &scrub_data);
+ local_idxs[(*local_idx_counter)++] = scrub_idx.raw;
+
+ if (scrub_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM SCRUB resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_scrub)
+ *flm_scrub = scrub_idx.ids;
+
/* Setup Action Set */
struct hw_db_inline_action_set_data action_set_data = {
.contains_jump = 0,
@@ -2730,6 +2886,7 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
.slc_lr = slc_lr_idx,
.tpe = tpe_idx,
.hsh = hsh_idx,
+ .scrub = scrub_idx,
};
struct hw_db_action_set_idx action_set_idx =
hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
@@ -2796,6 +2953,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ fh->context = fd->age.context;
nic_insert_flow(dev->ndev, fh);
} else if (attr->group > 0) {
@@ -2852,6 +3010,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
*/
int identical_km_entry_ft = -1;
+ /* Setup Action Set */
+
+ /* SCRUB/AGE action is not supported for group 0 */
+ if (fd->age.timeout != 0 || fd->age.context != NULL) {
+ NT_LOG(ERR, FILTER, "Action AGE is not supported for flow in group 0");
+ flow_nic_set_error(ERR_ACTION_AGE_UNSUPPORTED_GROUP_0, error);
+ goto error_out;
+ }
+
+ /* NOTE: SCRUB record 0 is used by default with timeout 0, i.e. flow will never
+ * AGE-out
+ */
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -3348,6 +3518,15 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+ /* Initialize SCRUB with default index 0, i.e. flow will never AGE-out */
+ if (hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_PRESET_ALL, 0, 0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_flm_scrub_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_SCRUB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -3483,6 +3662,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+ hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_PRESET_ALL, 0, 0);
+ hw_mod_flm_scrub_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SCRUB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
index 8ba8b8f67a..3b53288ddf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -55,4 +55,23 @@
*/
#define NTNIC_SCANNER_LOAD 0.01
-#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
+/*
+ * This define sets the timeout resolution of aged flow scanner (scrubber).
+ *
+ * The timeout resolution feature is provided in order to reduce the number of
+ * write-back operations for flows without attached meter. If the resolution
+ * is disabled (set to 0) and flow timeout is enabled via age action, then a write-back
+ * occurs every time the flow is evicted from the flow cache, essentially causing the
+ * lookup performance to drop to that of a flow with meter. By setting the timeout
+ * resolution (>0), write-back for flows happens only when the difference between
+ * the last recorded time for the flow and the current time exceeds the chosen resolution.
+ *
+ * The parameter value is a power of 2 in units of 2^28 nanoseconds. For example, the value 8 sets
+ * the timeout resolution to 2^8 * 2^28 / 1e9 = 68.7 seconds.
+ *
+ * NOTE: This parameter has a significant impact on flow lookup performance, especially
+ * if full scanner timeout resolution (=0) is configured.
+ */
+#define NTNIC_SCANNER_TIMEOUT_RESOLUTION 8
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index a212b3ab07..e0f455dc1b 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,7 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_age_queue.h"
#include "profile_inline/flm_evt_queue.h"
#include "rte_pmd_ntnic.h"
@@ -1814,6 +1815,21 @@ THREAD_FUNC port_event_thread_fn(void *context)
}
}
+ /* AGED event */
+ /* Note: RTE_FLOW_PORT_FLAG_STRICT_QUEUE flag is not supported so
+ * event is always generated
+ */
+ int aged_event_count = flm_age_event_get(port_no);
+
+ if (aged_event_count > 0 && eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_FLOW_AGED,
+ NULL);
+ flm_age_event_clear(port_no);
+ do_wait = false;
+ }
+
if (do_wait)
nt_os_wait_usec(10);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 69/86] net/ntnic: add termination thread
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (67 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 68/86] net/ntnic: add flow aging event Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 70/86] net/ntnic: add aging documentation Serhii Iliushyk
` (17 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Introduce clear_pdrv to unregister the driver
from global tracking.
Modify drv_deinit to call clear_pdrv and ensure
safe termination.
Add freeing of FLM status and age event queues.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../flow_api/profile_inline/flm_age_queue.c | 10 +++
.../flow_api/profile_inline/flm_age_queue.h | 1 +
.../flow_api/profile_inline/flm_evt_queue.c | 76 +++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 6 ++
5 files changed, 94 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index 76bbd57f65..d916eccec7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -44,6 +44,16 @@ void flm_age_queue_free(uint8_t port, uint16_t caller_id)
rte_ring_free(q);
}
+void flm_age_queue_free_all(void)
+{
+ int i;
+ int j;
+
+ for (i = 0; i < MAX_EVT_AGE_PORTS; i++)
+ for (j = 0; j < MAX_EVT_AGE_QUEUES; j++)
+ flm_age_queue_free(i, j);
+}
+
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
{
char name[20];
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 27154836c5..55c410ac86 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -32,6 +32,7 @@ int flm_age_event_get(uint8_t port);
void flm_age_event_set(uint8_t port);
void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+void flm_age_queue_free_all(void);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index db9687714f..761609a0ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -25,6 +25,82 @@ static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
/* Remote queues for flm status records */
static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+static void flm_inf_sta_queue_free(uint8_t port, uint8_t caller)
+{
+ struct rte_ring *q = NULL;
+
+ /* If the queue is not created, then ignore and return */
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ q = info_q_local[port];
+ info_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ q = info_q_remote[port];
+ info_q_remote[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port < MAX_STAT_LCL_QUEUES && stat_q_local[port] != NULL) {
+ q = stat_q_local[port];
+ stat_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port < MAX_STAT_RMT_QUEUES && stat_q_remote[port] != NULL) {
+ q = stat_q_remote[port];
+ stat_q_remote[port] = NULL;
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ break;
+ }
+
+ if (q)
+ rte_ring_free(q);
+}
+
+void flm_inf_sta_queue_free_all(uint8_t caller)
+{
+ int count = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ count = MAX_INFO_LCL_QUEUES;
+ break;
+
+ case FLM_INFO_REMOTE:
+ count = MAX_INFO_RMT_QUEUES;
+ break;
+
+ case FLM_STAT_LOCAL:
+ count = MAX_STAT_LCL_QUEUES;
+ break;
+
+ case FLM_STAT_REMOTE:
+ count = MAX_STAT_RMT_QUEUES;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ return;
+ }
+
+ for (int i = 0; i < count; i++)
+ flm_inf_sta_queue_free(i, caller);
+}
static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 3a61f844b6..d61b282472 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -47,6 +47,7 @@ enum {
#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+void flm_inf_sta_queue_free_all(uint8_t caller);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index e0f455dc1b..cdf5c346b7 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1420,6 +1420,12 @@ drv_deinit(struct drv_s *p_drv)
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
THREAD_JOIN(p_nt_drv->port_event_thread);
+ /* Free all local flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_LOCAL);
+ /* Free all remote flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_REMOTE);
+ /* Free all aged flow event queues */
+ flm_age_queue_free_all();
}
/* stop adapter */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 70/86] net/ntnic: add aging documentation
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (68 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 69/86] net/ntnic: add termination thread Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-30 1:56 ` Ferruh Yigit
2024-10-29 16:42 ` [PATCH v4 71/86] net/ntnic: add meter API Serhii Iliushyk
` (16 subsequent siblings)
86 siblings, 1 reply; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The ntnic.rst document was extended with the age feature specification.
ntnic.ini was extended with rte_flow age action support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 18 ++++++++++++++++++
doc/guides/rel_notes/release_24_11.rst | 1 +
3 files changed, 20 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 947c7ba3a1..af2981ccf6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -33,6 +33,7 @@ udp = Y
vlan = Y
[rte_flow actions]
+age = Y
drop = Y
jump = Y
mark = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e7e1cbcff7..e5a8d71892 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -148,3 +148,21 @@ FILTER
To enable logging on all levels use wildcard in the following way::
--log-level=pmd.net.ntnic.*,8
+
+Flow Scanner
+------------
+
+Flow Scanner is a DPDK mechanism that periodically scans the RTE flow tables to check for aged-out flows.
+When a flow's timeout is reached, i.e. no packets were matched by the flow within the timeout period,
+the ``RTE_ETH_EVENT_FLOW_AGED`` event is reported, and the flow is marked as aged-out.
+
+Therefore, the flow scanner functionality is closely connected to the rte_flow ``age`` action.
+
+The ``age`` action has the following characteristics:
+ - it functions only in group > 0;
+ - the flow timeout is specified in seconds;
+ - the flow scanner checks flow age timeouts once every 1-480 seconds, so flows may not age out immediately, depending on the length of the flow scanner's check interval;
+ - the aging counters can display a maximum of **n - 1** aged flows when the aging counters are set to **n**;
+ - overall, 15 different timeouts can be specified for flows at the same time (this limit is shared across all ``age`` actions; the maximum of 15 distinct timeouts can only be reached across different groups, e.g. when 5 flows with different timeouts are created in each group, while the limit within a single group is 14 distinct timeouts);
+ - a flow is not deleted automatically after it ages out;
+ - an aged-out flow can be updated with the ``flow update`` command, which reverts its aged-out status.
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 75769d1992..b449b01dc8 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -162,6 +162,7 @@ New Features
* Added basic handling of the virtual queues.
* Added flow handling API
* Added statistics API
+ * Added age rte flow action support
* **Added cryptodev queue pair reset support.**
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
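The aging semantics documented in the patch above (a per-flow timeout, a scanner that only checks every 1-480 seconds, and therefore possibly delayed age-out reporting) can be illustrated with a minimal Python model. This is a simplified sketch of the scanner behavior, not driver code; the function name and data layout are invented for illustration.

```python
def aged_flows(flows, scan_times):
    """Return the set of flow ids the scanner reports as aged-out.

    flows: list of (flow_id, last_match_ts, timeout_s) tuples.
    scan_times: times (seconds) at which the flow scanner runs.

    A flow is reported at the first scan where no packet has matched
    it within its timeout window -- so if the scanner interval is
    long, a flow can be reported well after its timeout elapsed.
    """
    reported = set()
    for t in sorted(scan_times):
        for flow_id, last_match, timeout in flows:
            if flow_id not in reported and t - last_match >= timeout:
                reported.add(flow_id)
    return reported
```

For example, a flow with a 10-second timeout that is checked only at t=5 and t=480 is not reported aged until the t=480 scan, even though it timed out at t=10 — matching the "flows may not age out immediately" caveat in the documentation.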
* [PATCH v4 71/86] net/ntnic: add meter API
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (69 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 70/86] net/ntnic: add aging documentation Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 72/86] net/ntnic: add meter module Serhii Iliushyk
` (15 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the meter API and implementation to the inline profile.
Management functions were extended with meter flow support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 +
.../flow_api/profile_inline/flm_evt_queue.c | 21 +
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 560 +++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 27 +
6 files changed, 597 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 89f071d982..032063712a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -100,6 +100,7 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *flm_mtr_handle;
void *group_handle;
void *hw_db_handle;
void *id_table_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index c75e7cff83..b40a27fbf1 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -57,6 +57,7 @@ enum res_type_e {
#define MAX_TCAM_START_OFFSETS 4
+#define MAX_FLM_MTRS_SUPPORTED 4
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
@@ -223,6 +224,8 @@ struct nic_flow_def {
uint32_t jump_to_group;
+ uint32_t mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
int full_offload;
/*
@@ -320,6 +323,8 @@ struct flow_handle {
uint32_t flm_db_idx_counter;
uint32_t flm_db_idxs[RES_COUNT];
+ uint32_t flm_mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
uint32_t flm_data[10];
uint8_t flm_prot;
uint8_t flm_kid;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 761609a0ea..d76c7da568 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -234,6 +234,27 @@ int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
return 0;
}
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+ /* If queues is not created, then ignore and return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_local[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM local info queue full");
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_remote[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM remote info queue full");
+ }
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index d61b282472..ee8175cf25 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,6 +48,7 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
void flm_inf_sta_queue_free_all(uint8_t caller);
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9e13058718..88b716d836 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define FLM_MTR_PROFILE_SIZE 0x100000
+#define FLM_MTR_STAT_SIZE 0x1000000
+#define UINT64_MSB ((uint64_t)1 << 63)
+
#define DMA_BLOCK_SIZE 256
#define DMA_OVERHEAD 20
#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
@@ -46,8 +50,336 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
+#define POLICING_PARAMETER_OFFSET 4096
+#define SIZE_CONVERTER 1099.511627776
+
+struct flm_mtr_stat_s {
+ struct dual_buckets_s *buckets;
+ atomic_uint_fast64_t n_pkt;
+ atomic_uint_fast64_t n_bytes;
+ uint64_t n_pkt_base;
+ uint64_t n_bytes_base;
+ atomic_uint_fast64_t stats_mask;
+ uint32_t flm_id;
+};
+
+struct flm_mtr_shared_stats_s {
+ struct flm_mtr_stat_s *stats;
+ uint32_t size;
+ int shared;
+};
+
+struct flm_flow_mtr_handle_s {
+ struct dual_buckets_s {
+ uint16_t rate_a;
+ uint16_t rate_b;
+ uint16_t size_a;
+ uint16_t size_b;
+ } dual_buckets[FLM_MTR_PROFILE_SIZE];
+
+ struct flm_mtr_shared_stats_s *port_stats[UINT8_MAX];
+};
+
static void *flm_lrn_queue_arr;
+static int flow_mtr_supported(struct flow_eth_dev *dev)
+{
+ return hw_mod_flm_present(&dev->ndev->be) && dev->ndev->be.flm.nb_variant == 2;
+}
+
+static uint64_t flow_mtr_meter_policy_n_max(void)
+{
+ return FLM_MTR_PROFILE_SIZE;
+}
+
+static inline uint64_t convert_policing_parameter(uint64_t value)
+{
+ uint64_t limit = POLICING_PARAMETER_OFFSET;
+ uint64_t shift = 0;
+ uint64_t res = value;
+
+ while (shift < 15 && value >= limit) {
+ limit <<= 1;
+ ++shift;
+ }
+
+ if (shift != 0) {
+ uint64_t tmp = POLICING_PARAMETER_OFFSET * (1 << (shift - 1));
+
+ if (tmp > value) {
+ res = 0;
+
+ } else {
+ tmp = value - tmp;
+ res = tmp >> (shift - 1);
+ }
+
+ if (res >= POLICING_PARAMETER_OFFSET)
+ res = POLICING_PARAMETER_OFFSET - 1;
+
+ res = res | (shift << 12);
+ }
+
+ return res;
+}
+
+static int flow_mtr_set_profile(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a, uint64_t bucket_rate_b,
+ uint64_t bucket_size_b)
+{
+ struct flow_nic_dev *ndev = dev->ndev;
+ struct flm_flow_mtr_handle_s *handle =
+ (struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle;
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ /* Round rates up to nearest 128 bytes/sec and shift to 128 bytes/sec units */
+ bucket_rate_a = (bucket_rate_a + 127) >> 7;
+ bucket_rate_b = (bucket_rate_b + 127) >> 7;
+
+ buckets->rate_a = convert_policing_parameter(bucket_rate_a);
+ buckets->rate_b = convert_policing_parameter(bucket_rate_b);
+
+ /* Round size down to 38-bit int */
+ if (bucket_size_a > 0x3fffffffff)
+ bucket_size_a = 0x3fffffffff;
+
+ if (bucket_size_b > 0x3fffffffff)
+ bucket_size_b = 0x3fffffffff;
+
+ /* Convert size to units of 2^40 / 10^9. Output is a 28-bit int. */
+ bucket_size_a = bucket_size_a / SIZE_CONVERTER;
+ bucket_size_b = bucket_size_b / SIZE_CONVERTER;
+
+ buckets->size_a = convert_policing_parameter(bucket_size_a);
+ buckets->size_b = convert_policing_parameter(bucket_size_b);
+
+ return 0;
+}
+
+static int flow_mtr_set_policy(struct flow_eth_dev *dev, uint32_t policy_id, int drop)
+{
+ (void)dev;
+ (void)policy_id;
+ (void)drop;
+ return 0;
+}
+
+static uint32_t flow_mtr_meters_supported(struct flow_eth_dev *dev, uint8_t caller_id)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ return handle->port_stats[caller_id]->size;
+}
+
+static int flow_mtr_create_meter(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t mtr_id,
+ uint32_t profile_id,
+ uint32_t policy_id,
+ uint64_t stats_mask)
+{
+ (void)policy_id;
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ union flm_handles flm_h;
+ flm_h.idx = mtr_id;
+ uint32_t flm_id = ntnic_id_table_get_id(dev->ndev->id_table_handle, flm_h, caller_id, 2);
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = buckets->rate_a;
+ learn_record->size = buckets->size_a;
+ learn_record->fill = buckets->size_a;
+
+ learn_record->ft_mbr =
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE; /* FT to assign if MBR has been exceeded */
+
+ learn_record->ent = 1;
+ learn_record->op = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ if (stats_mask)
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ mtr_stat[mtr_id].buckets = buckets;
+ mtr_stat[mtr_id].flm_id = flm_id;
+ atomic_store(&mtr_stat[mtr_id].stats_mask, stats_mask);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 3;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 0;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ /* Clear statistics so stats_mask prevents updates of counters on deleted meters */
+ atomic_store(&mtr_stat[mtr_id].stats_mask, 0);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, 0);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, 0);
+ mtr_stat[mtr_id].n_bytes_base = 0;
+ mtr_stat[mtr_id].n_pkt_base = 0;
+ mtr_stat[mtr_id].buckets = NULL;
+
+ ntnic_id_table_free_id(dev->ndev->id_table_handle, flm_id);
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = &handle->port_stats[caller_id]->stats[mtr_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = mtr_stat->flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = mtr_stat->buckets->rate_a;
+ learn_record->size = mtr_stat->buckets->size_a;
+ learn_record->adj = adjust_value;
+
+ learn_record->ft_mbr = NT_FLM_VIOLATING_MBR_FLOW_TYPE;
+
+ learn_record->ent = 1;
+ learn_record->op = 2;
+ learn_record->eor = 1;
+
+ if (atomic_load(&mtr_stat->stats_mask))
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
static void flm_setup_queues(void)
{
flm_lrn_queue_arr = flm_lrn_queue_create();
@@ -92,6 +424,8 @@ static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
for (uint32_t i = 0; i < records; ++i) {
struct flm_v25_inf_data_s *inf_data =
(struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
@@ -102,29 +436,62 @@ static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, u
&type);
/* Check that received record hold valid meter statistics */
- if (type == 1) {
- switch (inf_data->cause) {
- case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
- case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
- struct flow_handle *fh = (struct flow_handle *)flm_h.p;
- struct flm_age_event_s age_event;
- uint8_t port;
+ if (type == 2) {
+ uint64_t mtr_id = flm_h.idx;
+
+ if (mtr_id < handle->port_stats[caller_id]->size) {
+ struct flm_mtr_stat_s *mtr_stat =
+ handle->port_stats[caller_id]->stats;
+
+ /* Don't update a deleted meter */
+ uint64_t stats_mask = atomic_load(&mtr_stat[mtr_id].stats_mask);
+
+ if (stats_mask) {
+ atomic_store(&mtr_stat[mtr_id].n_pkt,
+ inf_data->packets | UINT64_MSB);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, inf_data->bytes);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, inf_data->packets);
+ struct flm_info_event_s stat_data;
+ bool remote_caller;
+ uint8_t port;
+
+ remote_caller = is_remote_caller(caller_id, &port);
+
+ /* Save stat data to flm stat queue */
+ stat_data.bytes = inf_data->bytes;
+ stat_data.packets = inf_data->packets;
+ stat_data.id = mtr_id;
+ stat_data.timestamp = inf_data->ts;
+ stat_data.cause = inf_data->cause;
+ flm_inf_queue_put(port, remote_caller, &stat_data);
+ }
+ }
- age_event.context = fh->context;
+ /* Check that received record holds valid flow data */
- is_remote_caller(caller_id, &port);
+ } else if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
- flm_age_queue_put(caller_id, &age_event);
- flm_age_event_set(port);
- }
- break;
+ age_event.context = fh->context;
- case INF_DATA_CAUSE_SW_UNLEARN:
- case INF_DATA_CAUSE_NA:
- case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
- case INF_DATA_CAUSE_SW_PROBE:
- default:
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
}
}
}
@@ -203,6 +570,42 @@ static uint32_t flm_update(struct flow_eth_dev *dev)
return inf_word_cnt + sta_word_cnt;
}
+static void flm_mtr_read_stats(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ *stats_mask = atomic_load(&mtr_stat[id].stats_mask);
+
+ if (*stats_mask) {
+ uint64_t pkt_1;
+ uint64_t pkt_2;
+ uint64_t nb;
+
+ do {
+ do {
+ pkt_1 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 & UINT64_MSB);
+
+ nb = atomic_load(&mtr_stat[id].n_bytes);
+ pkt_2 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 != pkt_2);
+
+ *green_pkt = pkt_1 - mtr_stat[id].n_pkt_base;
+ *green_bytes = nb - mtr_stat[id].n_bytes_base;
+
+ if (clear) {
+ mtr_stat[id].n_pkt_base = pkt_1;
+ mtr_stat[id].n_bytes_base = nb;
+ }
+ }
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -492,6 +895,8 @@ static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
fd->mark = UINT32_MAX;
fd->jump_to_group = UINT32_MAX;
+ memset(fd->mtr_ids, 0xff, sizeof(uint32_t) * MAX_FLM_MTRS_SUPPORTED);
+
fd->l2_prot = -1;
fd->l3_prot = -1;
fd->l4_prot = -1;
@@ -587,9 +992,17 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->sw9 = fh->flm_data[0];
learn_record->prot = fh->flm_prot;
+ learn_record->mbr_idx1 = fh->flm_mtr_ids[0];
+ learn_record->mbr_idx2 = fh->flm_mtr_ids[1];
+ learn_record->mbr_idx3 = fh->flm_mtr_ids[2];
+ learn_record->mbr_idx4 = fh->flm_mtr_ids[3];
+
/* Last non-zero mtr is used for statistics */
uint8_t mbrs = 0;
+ while (mbrs < MAX_FLM_MTRS_SUPPORTED && fh->flm_mtr_ids[mbrs] != 0)
+ ++mbrs;
+
learn_record->vol_idx = mbrs;
learn_record->nat_ip = fh->flm_nat_ipv4;
@@ -628,6 +1041,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
uint32_t *num_dest_port,
uint32_t *num_queues)
{
+ int mtr_count = 0;
+
unsigned int encap_decap_order = 0;
uint64_t modify_field_use_flags = 0x0;
@@ -813,6 +1228,29 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_METER:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_METER", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_meter meter_tmp;
+ const struct rte_flow_action_meter *meter =
+ memcpy_mask_if(&meter_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_meter));
+
+ if (mtr_count >= MAX_FLM_MTRS_SUPPORTED) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Number of METER actions exceeds %d.",
+ MAX_FLM_MTRS_SUPPORTED);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->mtr_ids[mtr_count++] = meter->mtr_id;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
@@ -2529,6 +2967,13 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
+ for (int i = 0; i < MAX_FLM_MTRS_SUPPORTED; ++i) {
+ struct flm_flow_mtr_handle_s *handle = fh->dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[fh->caller_id]->stats;
+ fh->flm_mtr_ids[i] =
+ fd->mtr_ids[i] == UINT32_MAX ? 0 : mtr_stat[fd->mtr_ids[i]].flm_id;
+ }
+
switch (fd->l4_prot) {
case PROT_L4_TCP:
fh->flm_prot = 6;
@@ -3598,6 +4043,29 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (ndev->id_table_handle == NULL)
goto err_exit0;
+ ndev->flm_mtr_handle = calloc(1, sizeof(struct flm_flow_mtr_handle_s));
+ struct flm_mtr_shared_stats_s *flm_shared_stats =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *flm_stats =
+ calloc(FLM_MTR_STAT_SIZE, sizeof(struct flm_mtr_stat_s));
+
+ if (ndev->flm_mtr_handle == NULL || flm_shared_stats == NULL ||
+ flm_stats == NULL) {
+ free(ndev->flm_mtr_handle);
+ free(flm_shared_stats);
+ free(flm_stats);
+ goto err_exit0;
+ }
+
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ ((struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle)->port_stats[i] =
+ flm_shared_stats;
+ }
+
+ flm_shared_stats->stats = flm_stats;
+ flm_shared_stats->size = FLM_MTR_STAT_SIZE;
+ flm_shared_stats->shared = UINT8_MAX;
+
if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
goto err_exit0;
@@ -3632,6 +4100,18 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ struct flm_flow_mtr_handle_s *handle = ndev->flm_mtr_handle;
+ handle->port_stats[i]->shared -= 1;
+
+ if (handle->port_stats[i]->shared == 0) {
+ free(handle->port_stats[i]->stats);
+ free(handle->port_stats[i]);
+ }
+ }
+
+ free(ndev->flm_mtr_handle);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
@@ -4755,6 +5235,11 @@ int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
port_info->max_nb_aging_objects = dev->nb_aging_objects;
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle)
+ port_info->max_nb_meters = mtr_handle->port_stats[caller_id]->size;
+
return res;
}
@@ -4786,6 +5271,35 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
dev->nb_aging_objects = port_attr->nb_aging_objects;
}
+ if (port_attr->nb_meters > 0) {
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle->port_stats[caller_id]->shared == 1) {
+ struct flm_mtr_stat_s *stats =
+ realloc(mtr_handle->port_stats[caller_id]->stats,
+ port_attr->nb_meters * sizeof(struct flm_mtr_stat_s));
+
+ if (stats == NULL) {
+ res = -1;
+
+ } else {
+ mtr_handle->port_stats[caller_id]->stats = stats;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+ }
+
+ } else {
+ mtr_handle->port_stats[caller_id] =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *stats =
+ calloc(port_attr->nb_meters, sizeof(struct flm_mtr_stat_s));
+
+ if (mtr_handle->port_stats[caller_id] == NULL || stats == NULL) {
+ free(mtr_handle->port_stats[caller_id]);
+ free(stats);
+ error->message = "Failed to allocate meter actions";
+ goto error_out;
+ }
+
+ mtr_handle->port_stats[caller_id]->stats = stats;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+ mtr_handle->port_stats[caller_id]->shared = 1;
+ }
+ }
+
return res;
error_out:
@@ -4825,8 +5339,18 @@ static const struct profile_inline_ops ops = {
/*
* NT Flow FLM Meter API
*/
+ .flow_mtr_supported = flow_mtr_supported,
+ .flow_mtr_meter_policy_n_max = flow_mtr_meter_policy_n_max,
+ .flow_mtr_set_profile = flow_mtr_set_profile,
+ .flow_mtr_set_policy = flow_mtr_set_policy,
+ .flow_mtr_create_meter = flow_mtr_create_meter,
+ .flow_mtr_probe_meter = flow_mtr_probe_meter,
+ .flow_mtr_destroy_meter = flow_mtr_destroy_meter,
+ .flm_mtr_adjust_stats = flm_mtr_adjust_stats,
+ .flow_mtr_meters_supported = flow_mtr_meters_supported,
.flm_setup_queues = flm_setup_queues,
.flm_free_queues = flm_free_queues,
+ .flm_mtr_read_stats = flm_mtr_read_stats,
.flm_update = flm_update,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 15da911ca7..1e9dcd549f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -308,6 +308,33 @@ struct profile_inline_ops {
*/
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
+
+ /*
+ * NT Flow FLM Meter API
+ */
+ int (*flow_mtr_supported)(struct flow_eth_dev *dev);
+ uint64_t (*flow_mtr_meter_policy_n_max)(void);
+ int (*flow_mtr_set_profile)(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a,
+ uint64_t bucket_rate_b, uint64_t bucket_size_b);
+ int (*flow_mtr_set_policy)(struct flow_eth_dev *dev, uint32_t policy_id, int drop);
+ int (*flow_mtr_create_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t profile_id, uint32_t policy_id, uint64_t stats_mask);
+ int (*flow_mtr_probe_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id);
+ int (*flow_mtr_destroy_meter)(struct flow_eth_dev *dev, uint8_t caller_id,
+ uint32_t mtr_id);
+ int (*flm_mtr_adjust_stats)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value);
+ uint32_t (*flow_mtr_meters_supported)(struct flow_eth_dev *dev, uint8_t caller_id);
+
+ void (*flm_mtr_read_stats)(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear);
+
uint32_t (*flm_update)(struct flow_eth_dev *dev);
int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
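The `convert_policing_parameter()` helper in the patch above encodes a rate or bucket size into a compact field: a 12-bit mantissa combined with a 4-bit shift placed in bits 12-15, with values below `POLICING_PARAMETER_OFFSET` (4096) stored verbatim. A direct Python port of the C arithmetic makes the encoding easy to experiment with; this is an illustrative transliteration of the patch code, not part of the driver.

```python
POLICING_PARAMETER_OFFSET = 4096  # mirrors the C #define

def convert_policing_parameter(value):
    """Encode value as a 12-bit mantissa plus a 4-bit shift (bits 12-15).

    Values below 4096 pass through unchanged; larger values are
    range-reduced by the shift, and the shift saturates at 15.
    """
    limit = POLICING_PARAMETER_OFFSET
    shift = 0
    res = value

    # Find the smallest shift such that value < 4096 << shift.
    while shift < 15 and value >= limit:
        limit <<= 1
        shift += 1

    if shift != 0:
        tmp = POLICING_PARAMETER_OFFSET * (1 << (shift - 1))

        if tmp > value:
            res = 0
        else:
            res = (value - tmp) >> (shift - 1)

        # Clamp the mantissa to 12 bits.
        if res >= POLICING_PARAMETER_OFFSET:
            res = POLICING_PARAMETER_OFFSET - 1

        res |= shift << 12

    return res
```

As a sanity check, values up to 4095 encode to themselves (shift 0), 4096 encodes to mantissa 0 with shift 1, and larger values lose low-order precision as the shift grows.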
* [PATCH v4 72/86] net/ntnic: add meter module
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (70 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 71/86] net/ntnic: add meter API Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 73/86] net/ntnic: update meter documentation Serhii Iliushyk
` (14 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The meter module was added:
1. add/remove profile
2. create/destroy flow
3. add/remove meter policy
4. read/update stats
The eth_dev_ops struct was extended with the ops above.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/ntos_drv.h | 14 +
drivers/net/ntnic/meson.build | 2 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 11 +-
drivers/net/ntnic/ntnic_mod_reg.c | 21 +
drivers/net/ntnic/ntnic_mod_reg.h | 12 +
6 files changed, 542 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 7b3c8ff3d6..f6ce442d17 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -12,6 +12,7 @@
#include <inttypes.h>
#include <rte_ether.h>
+#include "rte_mtr.h"
#include "stream_binary_flow_api.h"
#include "nthw_drv.h"
@@ -90,6 +91,19 @@ struct __rte_cache_aligned ntnic_tx_queue {
enum fpga_info_profile profile; /* Inline / Capture */
};
+struct nt_mtr_profile {
+ LIST_ENTRY(nt_mtr_profile) next;
+ uint32_t profile_id;
+ struct rte_mtr_meter_profile profile;
+};
+
+struct nt_mtr {
+ LIST_ENTRY(nt_mtr) next;
+ uint32_t mtr_id;
+ int shared;
+ struct nt_mtr_profile *profile;
+};
+
struct pmd_internals {
const struct rte_pci_device *pci_dev;
struct flow_eth_dev *flw_dev;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 8c6d02a5ec..ca46541ef3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -17,6 +17,7 @@ includes = [
include_directories('nthw'),
include_directories('nthw/supported'),
include_directories('nthw/model'),
+ include_directories('nthw/ntnic_meter'),
include_directories('nthw/flow_filter'),
include_directories('nthw/flow_api'),
include_directories('nim/'),
@@ -92,6 +93,7 @@ sources = files(
'nthw/flow_filter/flow_nthw_tx_cpy.c',
'nthw/flow_filter/flow_nthw_tx_ins.c',
'nthw/flow_filter/flow_nthw_tx_rpl.c',
+ 'nthw/ntnic_meter/ntnic_meter.c',
'nthw/model/nthw_fpga_model.c',
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
diff --git a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
new file mode 100644
index 0000000000..e4e8fe0c7d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
@@ -0,0 +1,483 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_meter.h>
+#include <rte_mtr.h>
+#include <rte_mtr_driver.h>
+#include <rte_malloc.h>
+
+#include "ntos_drv.h"
+#include "ntlog.h"
+#include "nt_util.h"
+#include "ntos_system.h"
+#include "ntnic_mod_reg.h"
+
+static inline uint8_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + (uint8_t)(port & 0x7f) + 1;
+}
+
+struct qos_integer_fractional {
+ uint32_t integer;
+ uint32_t fractional; /* 1/1024 */
+};
+
+/*
+ * Inline FLM metering
+ */
+
+static int eth_mtr_capabilities_get_inline(struct rte_eth_dev *eth_dev,
+ struct rte_mtr_capabilities *cap,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (!profile_inline_ops->flow_mtr_supported(internals->flw_dev)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Ethernet device does not support metering");
+ }
+
+ memset(cap, 0x0, sizeof(struct rte_mtr_capabilities));
+
+ /* MBR records use 28-bit integers */
+ cap->n_max = profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id);
+ cap->n_shared_max = cap->n_max;
+
+ cap->identical = 0;
+ cap->shared_identical = 0;
+
+ cap->shared_n_flows_per_mtr_max = UINT32_MAX;
+
+ /* Limited by number of MBR record ids per FLM learn record */
+ cap->chaining_n_mtrs_per_flow_max = 4;
+
+ cap->chaining_use_prev_mtr_color_supported = 0;
+ cap->chaining_use_prev_mtr_color_enforced = 0;
+
+ cap->meter_rate_max = (uint64_t)(0xfff << 0xf) * 1099;
+
+ cap->stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ /* Only color-blind mode is supported */
+ cap->color_aware_srtcm_rfc2697_supported = 0;
+ cap->color_aware_trtcm_rfc2698_supported = 0;
+ cap->color_aware_trtcm_rfc4115_supported = 0;
+
+ /* Focused on RFC2698 for now */
+ cap->meter_srtcm_rfc2697_n_max = 0;
+ cap->meter_trtcm_rfc2698_n_max = cap->n_max;
+ cap->meter_trtcm_rfc4115_n_max = 0;
+
+ cap->meter_policy_n_max = profile_inline_ops->flow_mtr_meter_policy_n_max();
+
+ /* Byte mode is supported */
+ cap->srtcm_rfc2697_byte_mode_supported = 0;
+ cap->trtcm_rfc2698_byte_mode_supported = 1;
+ cap->trtcm_rfc4115_byte_mode_supported = 0;
+
+ /* Packet mode not supported */
+ cap->srtcm_rfc2697_packet_mode_supported = 0;
+ cap->trtcm_rfc2698_packet_mode_supported = 0;
+ cap->trtcm_rfc4115_packet_mode_supported = 0;
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (profile->packet_mode != 0) {
+ return -rte_mtr_error_set(error, EINVAL,
+ RTE_MTR_ERROR_TYPE_METER_PROFILE_PACKET_MODE, NULL,
+ "Profile packet mode not supported");
+ }
+
+ if (profile->alg == RTE_MTR_SRTCM_RFC2697) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 2697 not supported");
+ }
+
+ if (profile->alg == RTE_MTR_TRTCM_RFC4115) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 4115 not supported");
+ }
+
+ if (profile->trtcm_rfc2698.cir != profile->trtcm_rfc2698.pir ||
+ profile->trtcm_rfc2698.cbs != profile->trtcm_rfc2698.pbs) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile committed and peak rates must be equal");
+ }
+
+ int res = profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id,
+ profile->trtcm_rfc2698.cir,
+ profile->trtcm_rfc2698.cbs, 0, 0);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile could not be added");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_delete_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id, 0, 0, 0, 0);
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t policy_id,
+ struct rte_mtr_meter_policy_params *policy,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ const struct rte_flow_action *actions = policy->actions[RTE_COLOR_GREEN];
+ int green_action_supported = (actions[0].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_VOID &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_PASSTHRU &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END);
+
+ actions = policy->actions[RTE_COLOR_YELLOW];
+ int yellow_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ actions = policy->actions[RTE_COLOR_RED];
+ int red_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ if (green_action_supported == 0 || yellow_action_supported == 0 ||
+ red_action_supported == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Unsupported meter policy actions");
+ }
+
+ if (profile_inline_ops->flow_mtr_set_policy(internals->flw_dev, policy_id, 1)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Policy could not be added");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_delete_inline(struct rte_eth_dev *eth_dev __rte_unused,
+ uint32_t policy_id,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ return 0;
+}
+
+static int eth_mtr_create_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (params->use_prev_mtr_color != 0 || params->dscp_table != NULL) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only color blind mode is supported");
+ }
+
+ uint64_t allowed_stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ if ((params->stats_mask & ~allowed_stats_mask) != 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Requested color stats not supported");
+ }
+
+ if (params->meter_enable == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Disabled meters not supported");
+ }
+
+ if (shared == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only shared mtrs are supported");
+ }
+
+ if (params->meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (params->meter_policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ int res = profile_inline_ops->flow_mtr_create_meter(internals->flw_dev,
+ caller_id,
+ mtr_id,
+ params->meter_profile_id,
+ params->meter_policy_id,
+ params->stats_mask);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_destroy_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_destroy_meter(internals->flw_dev, caller_id, mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_stats_adjust_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ uint64_t adjust_value,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ const uint64_t adjust_bit = 1ULL << 63;
+ const uint64_t probe_bit = 1ULL << 62;
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (adjust_value & adjust_bit) {
+ adjust_value &= adjust_bit - 1;
+
+ if (adjust_value > (uint64_t)UINT32_MAX) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "Adjust value is out of range");
+ }
+
+ if (profile_inline_ops->flm_mtr_adjust_stats(internals->flw_dev, caller_id, mtr_id,
+ (uint32_t)adjust_value)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to adjust offloaded MTR");
+ }
+
+ return 0;
+ }
+
+ if (adjust_value & probe_bit) {
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_probe_meter(internals->flw_dev, caller_id,
+ mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to offload to hardware");
+ }
+
+ return 0;
+ }
+
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Meter stats update requires that bit 63 or bit 62 of \"stats_mask\" is set.");
+}
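
The bit encoding consumed by this handler can be sketched from the application side: bit 63 of the `stats_update` argument requests a stats adjust (with the value in the low 32 bits), while bit 62 requests a probe. The helper names below are illustrative, not part of the driver API:

```c
#include <assert.h>
#include <stdint.h>

#define NTNIC_MTR_ADJUST_BIT (1ULL << 63) /* bit 63: adjust request */
#define NTNIC_MTR_PROBE_BIT  (1ULL << 62) /* bit 62: probe request */

/* Encode a stats-adjust request; the value must fit in 32 bits. */
static inline uint64_t mtr_encode_adjust(uint32_t value)
{
	return NTNIC_MTR_ADJUST_BIT | value;
}

/* Encode a probe request. */
static inline uint64_t mtr_encode_probe(void)
{
	return NTNIC_MTR_PROBE_BIT;
}

/* Mirror of the driver-side extraction: drop bit 63, keep the rest. */
static inline uint64_t mtr_decode_adjust(uint64_t v)
{
	return v & (NTNIC_MTR_ADJUST_BIT - 1);
}
```

A request with neither bit set is rejected by the driver with the "bit 63 or bit 62" error above.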
+
+static int eth_mtr_stats_read_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ memset(stats, 0x0, sizeof(struct rte_mtr_stats));
+ profile_inline_ops->flm_mtr_read_stats(internals->flw_dev, caller_id, mtr_id, stats_mask,
+ &stats->n_pkts[RTE_COLOR_GREEN],
+ &stats->n_bytes[RTE_COLOR_GREEN], clear);
+
+ return 0;
+}
+
+/*
+ * Ops setup
+ */
+
+static const struct rte_mtr_ops mtr_ops_inline = {
+ .capabilities_get = eth_mtr_capabilities_get_inline,
+ .meter_profile_add = eth_mtr_meter_profile_add_inline,
+ .meter_profile_delete = eth_mtr_meter_profile_delete_inline,
+ .create = eth_mtr_create_inline,
+ .destroy = eth_mtr_destroy_inline,
+ .meter_policy_add = eth_mtr_meter_policy_add_inline,
+ .meter_policy_delete = eth_mtr_meter_policy_delete_inline,
+ .stats_update = eth_mtr_stats_adjust_inline,
+ .stats_read = eth_mtr_stats_read_inline,
+};
+
+static int eth_mtr_ops_get(struct rte_eth_dev *eth_dev, void *ops)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ ntdrv_4ga_t *p_nt_drv = &internals->p_drv->ntdrv;
+ enum fpga_info_profile profile = p_nt_drv->adapter_info.fpga_info.profile;
+
+ switch (profile) {
+ case FPGA_INFO_PROFILE_INLINE:
+ *(const struct rte_mtr_ops **)ops = &mtr_ops_inline;
+ break;
+
+ case FPGA_INFO_PROFILE_UNKNOWN:
+
+ /* fallthrough */
+ case FPGA_INFO_PROFILE_CAPTURE:
+
+ /* fallthrough */
+ default:
+ NT_LOG(ERR, NTHW, "" PCIIDENT_PRINT_STR ": fpga profile not supported",
+ PCIIDENT_TO_DOMAIN(p_nt_drv->pciident),
+ PCIIDENT_TO_BUSNR(p_nt_drv->pciident),
+ PCIIDENT_TO_DEVNR(p_nt_drv->pciident),
+ PCIIDENT_TO_FUNCNR(p_nt_drv->pciident));
+ return -1;
+ }
+
+ return 0;
+}
+
+static struct meter_ops_s meter_ops = {
+ .eth_mtr_ops_get = eth_mtr_ops_get,
+};
+
+void meter_init(void)
+{
+ NT_LOG(DBG, NTNIC, "Meter ops initialized");
+ register_meter_ops(&meter_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index cdf5c346b7..df9ee77e06 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1682,7 +1682,7 @@ static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_con
return 0;
}
-static const struct eth_dev_ops nthw_eth_dev_ops = {
+static struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
.dev_stop = eth_dev_stop,
@@ -1705,6 +1705,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .mtr_ops_get = NULL,
.flow_ops_get = dev_flow_ops_get,
.xstats_get = eth_xstats_get,
.xstats_get_names = eth_xstats_get_names,
@@ -2168,6 +2169,14 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ const struct meter_ops_s *meter_ops = get_meter_ops();
+
+ if (meter_ops != NULL)
+ nthw_eth_dev_ops.mtr_ops_get = meter_ops->eth_mtr_ops_get;
+ else
+ NT_LOG(DBG, NTNIC, "Meter module is not initialized");
+
/* Initialize the queue system */
if (err == 0) {
sg_ops = get_sg_ops();
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 6737d18a6f..10aa778a57 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,27 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+/*
+ *
+ */
+static struct meter_ops_s *meter_ops;
+
+void register_meter_ops(struct meter_ops_s *ops)
+{
+ meter_ops = ops;
+}
+
+const struct meter_ops_s *get_meter_ops(void)
+{
+ if (meter_ops == NULL)
+ meter_init();
+
+ return meter_ops;
+}
+
+/*
+ *
+ */
static const struct ntnic_filter_ops *ntnic_filter_ops;
void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1e9dcd549f..3fbbee6490 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -115,6 +115,18 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+/* Meter ops section */
+struct meter_ops_s {
+ int (*eth_mtr_ops_get)(struct rte_eth_dev *eth_dev, void *ops);
+};
+
+void register_meter_ops(struct meter_ops_s *ops);
+const struct meter_ops_s *get_meter_ops(void);
+void meter_init(void);
+
+/*
+ *
+ */
struct ntnic_filter_ops {
int (*poll_statistics)(struct pmd_internals *internals);
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 73/86] net/ntnic: update meter documentation
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (71 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 72/86] net/ntnic: add meter module Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 74/86] net/ntnic: add action update Serhii Iliushyk
` (13 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Update ntnic.ini, ntnic.rst, and the release notes
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 2 ++
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
3 files changed, 4 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index af2981ccf6..474754dc90 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -43,3 +43,5 @@ queue = Y
raw_decap = Y
raw_encap = Y
rss = Y
+meter = Y
+passthru = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e5a8d71892..4ae94b161c 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -70,6 +70,7 @@ Features
- Exact match of 140 million flows and policies.
- Basic stats
- Extended stats
+- Flow metering, including meter policy API.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index b449b01dc8..1124d5a64c 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -163,6 +163,7 @@ New Features
* Added flow handling API
* Added statistics API
* Added age rte flow action support
+ * Added meter flow metering and flow policy support
* **Added cryptodev queue pair reset support.**
--
2.45.0
* [PATCH v4 74/86] net/ntnic: add action update
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (72 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 73/86] net/ntnic: update meter documentation Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 75/86] net/ntnic: add flow " Serhii Iliushyk
` (12 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
rte_flow_ops was extended with the action update feature.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 66 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 10 +++
2 files changed, 76 insertions(+)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 6d65ffd38f..8edaccb65c 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -9,6 +9,7 @@
#include "ntnic_mod_reg.h"
#include "ntos_system.h"
#include "ntos_drv.h"
+#include "rte_flow.h"
#define MAX_RTE_FLOWS 8192
@@ -703,6 +704,70 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
return res;
}
+static int eth_flow_actions_update(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+ int res = -1;
+
+ if (internals->flw_dev) {
+ struct pmd_internals *dev_private =
+ (struct pmd_internals *)eth_dev->data->dev_private;
+ struct fpga_info_s *fpga_info = &dev_private->p_drv->ntdrv.adapter_info.fpga_info;
+ struct cnv_action_s action = { 0 };
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ uint32_t queue_offset = 0;
+
+ if (dev_private->type == PORT_TYPE_OVERRIDE &&
+ dev_private->vpq_nb_vq > 0) {
+ /*
+ * The queues coming from the main PMD will always start from 0
+ * When the port is a VF/vDPA port, the queues must be changed
+ * to match the queues allocated for VF/vDPA.
+ */
+ queue_offset = dev_private->vpq[0].id;
+ }
+
+ if (create_action_elements_inline(&action, actions, MAX_ACTIONS,
+ queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return -1;
+ }
+ }
+
+ if (is_flow_handle_typecast(flow)) {
+ res = flow_filter_ops->flow_actions_update(internals->flw_dev,
+ (void *)flow,
+ action.flow_actions,
+ &flow_error);
+
+ } else {
+ res = flow_filter_ops->flow_actions_update(internals->flw_dev,
+ flow->flw_hdl,
+ action.flow_actions,
+ &flow_error);
+ }
+ }
+
+ convert_error(error, &flow_error);
+
+ return res;
+}
+
static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
struct rte_flow *flow,
FILE *file,
@@ -941,6 +1006,7 @@ static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
.flush = eth_flow_flush,
+ .actions_update = eth_flow_actions_update,
.dev_dump = eth_flow_dev_dump,
.get_aged_flows = eth_flow_get_aged_flows,
.info_get = eth_flow_info_get,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 3fbbee6490..563e62ebce 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -292,6 +292,11 @@ struct profile_inline_ops {
uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_actions_update_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -401,6 +406,11 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_actions_update)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
/*
--
2.45.0
* [PATCH v4 75/86] net/ntnic: add flow action update
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (73 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 74/86] net/ntnic: add action update Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 76/86] net/ntnic: flow update was added Serhii Iliushyk
` (11 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
flow_filter_ops was extended with the flow action update API.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 16 ++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 5 +++++
2 files changed, 21 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 5349dc84ab..9164f4cc2e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -266,6 +266,21 @@ static int flow_flush(struct flow_eth_dev *dev, uint16_t caller_id, struct rte_f
return profile_inline_ops->flow_flush_profile_inline(dev, caller_id, error);
}
+static int flow_actions_update(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_actions_update_profile_inline(dev, flow, action, error);
+}
+
/*
* Device Management API
*/
@@ -1127,6 +1142,7 @@ static const struct flow_filter_ops ops = {
.flow_create = flow_create,
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
+ .flow_actions_update = flow_actions_update,
.flow_dev_dump = flow_dev_dump,
.flow_get_flm_stats = flow_get_flm_stats,
.flow_get_aged_flows = flow_get_aged_flows,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index ea1d9c31b2..8a03be1ab7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -42,6 +42,11 @@ int flow_flush_profile_inline(struct flow_eth_dev *dev,
uint16_t caller_id,
struct rte_flow_error *error);
+int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
--
2.45.0
* [PATCH v4 76/86] net/ntnic: flow update was added
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (74 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 75/86] net/ntnic: add flow " Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 77/86] net/ntnic: update documentation for flow actions update Serhii Iliushyk
` (10 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Flow action update was implemented.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../profile_inline/flow_api_profile_inline.c | 165 ++++++++++++++++++
1 file changed, 165 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 88b716d836..98bfc01539 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -15,6 +15,7 @@
#include "flow_api_hw_db_inline.h"
#include "flow_api_profile_inline_config.h"
#include "flow_id_table.h"
+#include "rte_flow.h"
#include "stream_binary_flow_api.h"
#include "flow_api_profile_inline.h"
@@ -36,6 +37,7 @@
#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_OP_RELEARN 2
#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
#define NT_VIOLATING_MBR_CFN 0
@@ -4385,6 +4387,168 @@ int flow_flush_profile_inline(struct flow_eth_dev *dev,
return err;
}
+int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ assert(dev);
+ assert(flow);
+
+ uint32_t num_dest_port = 0;
+ uint32_t num_queues = 0;
+
+ int group = (int)flow->flm_kid - 2;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (flow->type != FLOW_HANDLE_TYPE_FLM) {
+ NT_LOG(ERR, FILTER,
+ "Flow actions update not supported for group 0 or default flows");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ return -1;
+ }
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate nic_flow_def";
+ return -1;
+ }
+
+ fd->non_empty = 1;
+
+ int res =
+ interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
+
+ if (res) {
+ free(fd);
+ return -1;
+ }
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ /* Setup new actions */
+ uint32_t local_idx_counter = 0;
+ uint32_t local_idxs[RES_COUNT];
+ memset(local_idxs, 0x0, sizeof(uint32_t) * RES_COUNT);
+
+ struct hw_db_inline_qsl_data qsl_data;
+ setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
+
+ struct hw_db_inline_hsh_data hsh_data;
+ setup_db_hsh_data(fd, &hsh_data);
+
+ {
+ uint32_t flm_ft = 0;
+ uint32_t flm_scrub = 0;
+
+ /* Setup FLM RCP */
+ const struct hw_db_inline_flm_rcp_data *flm_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_FLM_RCP,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter);
+
+ if (flm_data == NULL) {
+ NT_LOG(ERR, FILTER, "Could not retrieve FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ goto error_out;
+ }
+
+ struct hw_db_flm_idx flm_idx =
+ hw_db_inline_flm_add(dev->ndev, dev->ndev->hw_db_handle, flm_data, group);
+
+ local_idxs[local_idx_counter++] = flm_idx.raw;
+
+ if (flm_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, group, local_idxs,
+ &local_idx_counter, &flow->flm_rpl_ext_ptr, &flm_ft,
+ &flm_scrub, error)) {
+ goto error_out;
+ }
+
+ /* Update flow_handle */
+ for (int i = 0; i < MAX_FLM_MTRS_SUPPORTED; ++i) {
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat =
+ handle->port_stats[flow->caller_id]->stats;
+ flow->flm_mtr_ids[i] =
+ fd->mtr_ids[i] == UINT32_MAX ? 0 : mtr_stat[fd->mtr_ids[i]].flm_id;
+ }
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ switch (fd->modify_field[i].select) {
+ case CPY_SELECT_DSCP_IPV4:
+
+ /* fallthrough */
+ case CPY_SELECT_DSCP_IPV6:
+ flow->flm_dscp = fd->modify_field[i].value8[0];
+ break;
+
+ case CPY_SELECT_RQI_QFI:
+ flow->flm_rqi = (fd->modify_field[i].value8[0] >> 6) & 0x1;
+ flow->flm_qfi = fd->modify_field[i].value8[0] & 0x3f;
+ break;
+
+ case CPY_SELECT_IPV4:
+ flow->flm_nat_ipv4 = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ case CPY_SELECT_PORT:
+ flow->flm_nat_port = ntohs(fd->modify_field[i].value16[0]);
+ break;
+
+ case CPY_SELECT_TEID:
+ flow->flm_teid = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Unknown modify field: %d",
+ fd->modify_field[i].select);
+ break;
+ }
+ }
+
+ flow->flm_ft = (uint8_t)flm_ft;
+ flow->flm_scrub_prof = (uint8_t)flm_scrub;
+ flow->context = fd->age.context;
+
+ /* Program flow */
+ flm_flow_programming(flow, NT_FLM_OP_RELEARN);
+
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter);
+ memset(flow->flm_db_idxs, 0x0, sizeof(struct hw_db_idx) * RES_COUNT);
+
+ flow->flm_db_idx_counter = local_idx_counter;
+
+ for (int i = 0; i < RES_COUNT; ++i)
+ flow->flm_db_idxs[i] = local_idxs[i];
+ }
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ free(fd);
+ return 0;
+
+error_out:
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle, (struct hw_db_idx *)local_idxs,
+ local_idx_counter);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ free(fd);
+ return -1;
+}
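
In the `CPY_SELECT_RQI_QFI` case above, a single byte carries RQI in bit 6 and the 6-bit QFI in bits 0..5 (the layout of the GTP-U PDU session container field). A small standalone sketch of that packing, with illustrative helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Pack/unpack helpers mirroring the bit layout used above:
 * bit 6 carries RQI, bits 0..5 carry the 6-bit QFI. */
static inline uint8_t pack_rqi_qfi(uint8_t rqi, uint8_t qfi)
{
	return (uint8_t)(((rqi & 0x1) << 6) | (qfi & 0x3f));
}

static inline uint8_t unpack_rqi(uint8_t b)
{
	return (b >> 6) & 0x1;
}

static inline uint8_t unpack_qfi(uint8_t b)
{
	return b & 0x3f;
}
```

The unpack helpers are exactly the shifts and masks applied to `value8[0]` in the switch above.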
+
static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
{
return (hash_mask & hash_bits) == hash_bits;
@@ -5328,6 +5492,7 @@ static const struct profile_inline_ops ops = {
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
+ .flow_actions_update_profile_inline = flow_actions_update_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
.flow_get_aged_flows_profile_inline = flow_get_aged_flows_profile_inline,
/*
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 77/86] net/ntnic: update documentation for flow actions update
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (75 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 76/86] net/ntnic: flow update was added Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 78/86] net/ntnic: migrate to the RTE spinlock Serhii Iliushyk
` (9 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Update ntnic.rst and release notes
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
2 files changed, 2 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 4ae94b161c..e0dfbefacb 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -71,6 +71,7 @@ Features
- Basic stats
- Extended stats
- Flow metering, including meter policy API.
+- Flow update. Update of the action list for a specific flow
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 1124d5a64c..50cbebc33f 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -164,6 +164,7 @@ New Features
* Added statistics API
* Added age rte flow action support
* Added meter flow metering and flow policy support
+ * Added flow actions update support
* **Added cryptodev queue pair reset support.**
--
2.45.0
* [PATCH v4 78/86] net/ntnic: migrate to the RTE spinlock
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (76 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 77/86] net/ntnic: update documentation for flow actions update Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 79/86] net/ntnic: remove unnecessary type cast Serhii Iliushyk
` (8 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Migrate from pthread mutexes to rte_spinlock
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 6 +-
drivers/net/ntnic/include/ntdrv_4ga.h | 3 +-
.../net/ntnic/nthw/core/include/nthw_i2cm.h | 4 +-
.../net/ntnic/nthw/core/include/nthw_rpf.h | 5 +-
drivers/net/ntnic/nthw/core/nthw_rpf.c | 3 +-
drivers/net/ntnic/nthw/flow_api/flow_api.c | 43 +++++-----
.../net/ntnic/nthw/flow_api/flow_id_table.c | 20 +++--
.../profile_inline/flow_api_profile_inline.c | 80 +++++++++++--------
.../ntnic/nthw/flow_filter/flow_nthw_flm.c | 47 +++++++++--
drivers/net/ntnic/nthw/nthw_rac.c | 38 ++-------
drivers/net/ntnic/nthw/nthw_rac.h | 2 +-
drivers/net/ntnic/ntnic_ethdev.c | 31 +++----
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 6 +-
13 files changed, 155 insertions(+), 133 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 032063712a..d5382669da 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -6,7 +6,7 @@
#ifndef _FLOW_API_H_
#define _FLOW_API_H_
-#include <pthread.h>
+#include <rte_spinlock.h>
#include "ntlog.h"
@@ -110,13 +110,13 @@ struct flow_nic_dev {
struct flow_handle *flow_base;
/* linked list of all FLM flows created on this NIC */
struct flow_handle *flow_base_flm;
- pthread_mutex_t flow_mtx;
+ rte_spinlock_t flow_mtx;
/* NIC backend API */
struct flow_api_backend_s be;
/* linked list of created eth-port devices on this NIC */
struct flow_eth_dev *eth_base;
- pthread_mutex_t mtx;
+ rte_spinlock_t mtx;
/* RSS hashing configuration */
struct nt_eth_rss_conf rss_conf;
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 677aa7b6c8..78cf10368a 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -7,6 +7,7 @@
#define __NTDRV_4GA_H__
#include "nt4ga_adapter.h"
+#include <rte_spinlock.h>
typedef struct ntdrv_4ga_s {
uint32_t pciident;
@@ -15,7 +16,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
- pthread_mutex_t stat_lck;
+ rte_spinlock_t stat_lck;
rte_thread_t stat_thread;
rte_thread_t port_event_thread;
} ntdrv_4ga_t;
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h b/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h
index 6e0ec4cf5e..eeb4dffe25 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h
@@ -7,7 +7,7 @@
#define __NTHW_II2CM_H__
#include "nthw_fpga_model.h"
-#include "pthread.h"
+#include "rte_spinlock.h"
struct nt_i2cm {
nthw_fpga_t *mp_fpga;
@@ -39,7 +39,7 @@ struct nt_i2cm {
nthw_field_t *mp_fld_io_exp_rst;
nthw_field_t *mp_fld_io_exp_int_b;
- pthread_mutex_t i2cmmutex;
+ rte_spinlock_t i2cmmutex;
};
typedef struct nt_i2cm nthw_i2cm_t;
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
index 4c6c57ba55..00b322b2ea 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
@@ -7,7 +7,8 @@
#define NTHW_RPF_HPP_
#include "nthw_fpga_model.h"
-#include "pthread.h"
+#include "rte_spinlock.h"
+#include <rte_spinlock.h>
struct nthw_rpf {
nthw_fpga_t *mp_fpga;
@@ -28,7 +29,7 @@ struct nthw_rpf {
int m_default_maturing_delay;
bool m_administrative_block; /* used to enforce license expiry */
- pthread_mutex_t rpf_mutex;
+ rte_spinlock_t rpf_mutex;
};
typedef struct nthw_rpf nthw_rpf_t;
diff --git a/drivers/net/ntnic/nthw/core/nthw_rpf.c b/drivers/net/ntnic/nthw/core/nthw_rpf.c
index 81c704d01a..1ed4d7b4e0 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rpf.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rpf.c
@@ -8,6 +8,7 @@
#include "nthw_drv.h"
#include "nthw_register.h"
#include "nthw_rpf.h"
+#include "rte_spinlock.h"
nthw_rpf_t *nthw_rpf_new(void)
{
@@ -65,7 +66,7 @@ int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance)
nthw_fpga_get_product_param(p_fpga, NT_RPF_MATURING_DEL_DEFAULT, 0);
/* Initialize mutex */
- pthread_mutex_init(&p->rpf_mutex, NULL);
+ rte_spinlock_init(&p->rpf_mutex);
return 0;
}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 9164f4cc2e..9689aece58 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -2,6 +2,7 @@
* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2023 Napatech A/S
*/
+#include "rte_spinlock.h"
#include "ntlog.h"
#include "nt_util.h"
@@ -44,7 +45,7 @@ const char *dbg_res_descr[] = {
};
static struct flow_nic_dev *dev_base;
-static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+static rte_spinlock_t base_mtx = RTE_SPINLOCK_INITIALIZER;
/*
* Error handling
@@ -400,7 +401,7 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
#endif
/* delete all created flows from this device */
- pthread_mutex_lock(&ndev->mtx);
+ rte_spinlock_lock(&ndev->mtx);
struct flow_handle *flow = ndev->flow_base;
@@ -454,7 +455,7 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
if (nic_remove_eth_port_dev(ndev, eth_dev) != 0)
NT_LOG(ERR, FILTER, "ERROR : eth_dev %p not found", eth_dev);
- pthread_mutex_unlock(&ndev->mtx);
+ rte_spinlock_unlock(&ndev->mtx);
/* free eth_dev */
free(eth_dev);
@@ -495,15 +496,15 @@ static void done_resource_elements(struct flow_nic_dev *ndev, enum res_type_e re
static void list_insert_flow_nic(struct flow_nic_dev *ndev)
{
- pthread_mutex_lock(&base_mtx);
+ rte_spinlock_lock(&base_mtx);
ndev->next = dev_base;
dev_base = ndev;
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
}
static int list_remove_flow_nic(struct flow_nic_dev *ndev)
{
- pthread_mutex_lock(&base_mtx);
+ rte_spinlock_lock(&base_mtx);
struct flow_nic_dev *nic_dev = dev_base, *prev = NULL;
while (nic_dev) {
@@ -514,7 +515,7 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
else
dev_base = nic_dev->next;
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return 0;
}
@@ -522,7 +523,7 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
nic_dev = nic_dev->next;
}
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return -1;
}
@@ -554,27 +555,27 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
"ERROR: Internal array for multiple queues too small for API");
}
- pthread_mutex_lock(&base_mtx);
+ rte_spinlock_lock(&base_mtx);
struct flow_nic_dev *ndev = get_nic_dev_from_adapter_no(adapter_no);
if (!ndev) {
/* Error - no flow api found on specified adapter */
NT_LOG(ERR, FILTER, "ERROR: no flow interface registered for adapter %d",
adapter_no);
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return NULL;
}
if (ndev->ports < ((uint16_t)port_no + 1)) {
NT_LOG(ERR, FILTER, "ERROR: port exceeds supported port range for adapter");
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return NULL;
}
if ((alloc_rx_queues - 1) > FLOW_MAX_QUEUES) { /* 0th is exception so +1 */
NT_LOG(ERR, FILTER,
"ERROR: Exceeds supported number of rx queues per eth device");
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return NULL;
}
@@ -584,20 +585,19 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
if (eth_dev) {
NT_LOG(DBG, FILTER, "Re-opening existing NIC port device: NIC DEV: %i Port %i",
adapter_no, port_no);
- pthread_mutex_unlock(&base_mtx);
flow_delete_eth_dev(eth_dev);
eth_dev = NULL;
}
+ rte_spinlock_lock(&ndev->mtx);
+
eth_dev = calloc(1, sizeof(struct flow_eth_dev));
if (!eth_dev) {
NT_LOG(ERR, FILTER, "ERROR: calloc failed");
- goto err_exit1;
+ goto err_exit0;
}
- pthread_mutex_lock(&ndev->mtx);
-
eth_dev->ndev = ndev;
eth_dev->port = port_no;
eth_dev->port_id = port_id;
@@ -684,15 +684,14 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
nic_insert_eth_port_dev(ndev, eth_dev);
- pthread_mutex_unlock(&ndev->mtx);
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&ndev->mtx);
+ rte_spinlock_unlock(&base_mtx);
return eth_dev;
err_exit0:
- pthread_mutex_unlock(&ndev->mtx);
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&ndev->mtx);
+ rte_spinlock_unlock(&base_mtx);
-err_exit1:
if (eth_dev)
free(eth_dev);
@@ -799,7 +798,7 @@ struct flow_nic_dev *flow_api_create(uint8_t adapter_no, const struct flow_api_b
for (int i = 0; i < RES_COUNT; i++)
assert(ndev->res[i].alloc_bm);
- pthread_mutex_init(&ndev->mtx, NULL);
+ rte_spinlock_init(&ndev->mtx);
list_insert_flow_nic(ndev);
return ndev;
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index a3f5e1d7f7..a63f5542d1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -3,12 +3,12 @@
* Copyright(c) 2024 Napatech A/S
*/
-#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "flow_id_table.h"
+#include "rte_spinlock.h"
#define NTNIC_ARRAY_BITS 14
#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
@@ -25,7 +25,7 @@ struct ntnic_id_table_element {
struct ntnic_id_table_data {
struct ntnic_id_table_element *arrays[NTNIC_ARRAY_SIZE];
- pthread_mutex_t mtx;
+ rte_spinlock_t mtx;
uint32_t next_id;
@@ -68,7 +68,7 @@ void *ntnic_id_table_create(void)
{
struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
- pthread_mutex_init(&handle->mtx, NULL);
+ rte_spinlock_init(&handle->mtx);
handle->next_id = 1;
return handle;
@@ -81,8 +81,6 @@ void ntnic_id_table_destroy(void *id_table)
for (uint32_t i = 0; i < NTNIC_ARRAY_SIZE; ++i)
free(handle->arrays[i]);
- pthread_mutex_destroy(&handle->mtx);
-
free(id_table);
}
@@ -91,7 +89,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
{
struct ntnic_id_table_data *handle = id_table;
- pthread_mutex_lock(&handle->mtx);
+ rte_spinlock_lock(&handle->mtx);
uint32_t new_id = ntnic_id_table_array_pop_free_id(handle);
@@ -103,7 +101,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
element->type = type;
memcpy(&element->handle, &flm_h, sizeof(union flm_handles));
- pthread_mutex_unlock(&handle->mtx);
+ rte_spinlock_unlock(&handle->mtx);
return new_id;
}
@@ -112,7 +110,7 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
{
struct ntnic_id_table_data *handle = id_table;
- pthread_mutex_lock(&handle->mtx);
+ rte_spinlock_lock(&handle->mtx);
struct ntnic_id_table_element *current_element =
ntnic_id_table_array_find_element(handle, id);
@@ -127,7 +125,7 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
if (handle->free_tail == 0)
handle->free_tail = handle->free_head;
- pthread_mutex_unlock(&handle->mtx);
+ rte_spinlock_unlock(&handle->mtx);
}
void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
@@ -135,7 +133,7 @@ void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h,
{
struct ntnic_id_table_data *handle = id_table;
- pthread_mutex_lock(&handle->mtx);
+ rte_spinlock_lock(&handle->mtx);
struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, id);
@@ -143,5 +141,5 @@ void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h,
*type = element->type;
memcpy(flm_h, &element->handle, sizeof(union flm_handles));
- pthread_mutex_unlock(&handle->mtx);
+ rte_spinlock_unlock(&handle->mtx);
}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 98bfc01539..9c554ee7e2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include "generic/rte_spinlock.h"
#include "ntlog.h"
#include "nt_util.h"
@@ -20,6 +21,7 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#include <rte_spinlock.h>
#include <rte_common.h>
#define FLM_MTR_PROFILE_SIZE 0x100000
@@ -189,7 +191,7 @@ static int flow_mtr_create_meter(struct flow_eth_dev *dev,
(void)policy_id;
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -238,7 +240,7 @@ static int flow_mtr_create_meter(struct flow_eth_dev *dev,
mtr_stat[mtr_id].flm_id = flm_id;
atomic_store(&mtr_stat[mtr_id].stats_mask, stats_mask);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -247,7 +249,7 @@ static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uin
{
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -278,7 +280,7 @@ static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uin
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -287,7 +289,7 @@ static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, u
{
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -330,7 +332,7 @@ static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, u
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -340,7 +342,7 @@ static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uin
{
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -377,7 +379,7 @@ static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uin
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -514,9 +516,9 @@ static void flm_mtr_read_sta_records(struct flow_eth_dev *dev, uint32_t *data, u
uint8_t port;
bool remote_caller = is_remote_caller(caller_id, &port);
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
((struct flow_handle *)flm_h.p)->learn_ignored = 1;
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
struct flm_status_event_s data = {
.flow = flm_h.p,
.learn_ignore = sta_data->lis,
@@ -813,7 +815,7 @@ static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t p
static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
{
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (ndev->flow_base)
ndev->flow_base->prev = fh;
@@ -822,7 +824,7 @@ static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
fh->prev = NULL;
ndev->flow_base = fh;
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
@@ -830,7 +832,7 @@ static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
struct flow_handle *next = fh->next;
struct flow_handle *prev = fh->prev;
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (next && prev) {
prev->next = next;
@@ -847,12 +849,12 @@ static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
ndev->flow_base = NULL;
}
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh)
{
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (ndev->flow_base_flm)
ndev->flow_base_flm->prev = fh;
@@ -861,7 +863,7 @@ static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *f
fh->prev = NULL;
ndev->flow_base_flm = fh;
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh_flm)
@@ -869,7 +871,7 @@ static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *f
struct flow_handle *next = fh_flm->next;
struct flow_handle *prev = fh_flm->prev;
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (next && prev) {
prev->next = next;
@@ -886,7 +888,7 @@ static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *f
ndev->flow_base_flm = NULL;
}
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
@@ -4192,20 +4194,20 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
struct nic_flow_def *fd = allocate_nic_flow_def();
if (fd == NULL)
- goto err_exit;
+ goto err_exit0;
res = interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
if (res)
- goto err_exit;
+ goto err_exit0;
res = interpret_flow_elements(dev, elem, fd, error, forced_vlan_vid_local, &port_id,
packet_data, packet_mask, &key_def);
if (res)
- goto err_exit;
+ goto err_exit0;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
/* Translate group IDs */
if (fd->jump_to_group != UINT32_MAX &&
@@ -4239,19 +4241,27 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
NT_LOG(DBG, FILTER, ">>>>> [Dev %p] Nic %i, Port %i: fh %p fd %p - implementation <<<<<",
dev, dev->ndev->adapter_no, dev->port, fh, fd);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return fh;
err_exit:
- if (fh)
+ if (fh) {
flow_destroy_locked_profile_inline(dev, fh, NULL);
-
- else
+ fh = NULL;
+ } else {
free(fd);
+ fd = NULL;
+ }
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
- pthread_mutex_unlock(&dev->ndev->mtx);
+err_exit0:
+ if (fd) {
+ free(fd);
+ fd = NULL;
+ }
NT_LOG(ERR, FILTER, "ERR: %s", __func__);
return NULL;
@@ -4312,6 +4322,7 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
(struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
free(fh->fd);
+ fh->fd = NULL;
}
if (err) {
@@ -4320,6 +4331,7 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
}
free(fh);
+ fh = NULL;
#ifdef FLOW_DEBUG
dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
@@ -4337,9 +4349,9 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
if (flow) {
/* Delete this flow */
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
err = flow_destroy_locked_profile_inline(dev, flow, error);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
}
return err;
@@ -4427,7 +4439,7 @@ int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
return -1;
}
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
/* Setup new actions */
uint32_t local_idx_counter = 0;
@@ -4534,7 +4546,7 @@ int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
flow->flm_db_idxs[i] = local_idxs[i];
}
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
free(fd);
return 0;
@@ -4543,7 +4555,7 @@ int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle, (struct hw_db_idx *)local_idxs,
local_idx_counter);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
free(fd);
return -1;
@@ -5280,7 +5292,7 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
{
flow_nic_set_error(ERR_SUCCESS, error);
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
if (flow != NULL) {
if (flow->type == FLOW_HANDLE_TYPE_FLM) {
@@ -5339,7 +5351,7 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
}
}
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
diff --git a/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c b/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c
index 6f3b381a17..8855978349 100644
--- a/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c
+++ b/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c
@@ -678,11 +678,13 @@ int flm_nthw_buf_ctrl_update(const struct flm_nthw *p, uint32_t *lrn_free, uint3
uint32_t address_bufctrl = nthw_register_get_address(p->mp_buf_ctrl);
nthw_rab_bus_id_t bus_id = 1;
struct dma_buf_ptr bc_buf;
- ret = nthw_rac_rab_dma_begin(rac);
+ rte_spinlock_lock(&rac->m_mutex);
+ ret = !rac->m_dma_active ? nthw_rac_rab_dma_begin(rac) : -1;
if (ret == 0) {
nthw_rac_rab_read32_dma(rac, bus_id, address_bufctrl, 2, &bc_buf);
- ret = nthw_rac_rab_dma_commit(rac);
+ ret = rac->m_dma_active ? nthw_rac_rab_dma_commit(rac) : (assert(0), -1);
+ rte_spinlock_unlock(&rac->m_mutex);
if (ret != 0)
return ret;
@@ -692,6 +694,13 @@ int flm_nthw_buf_ctrl_update(const struct flm_nthw *p, uint32_t *lrn_free, uint3
*lrn_free = bc_buf.base[bc_index & bc_mask] & 0xffff;
*inf_avail = (bc_buf.base[bc_index & bc_mask] >> 16) & 0xffff;
*sta_avail = bc_buf.base[(bc_index + 1) & bc_mask] & 0xffff;
+ } else {
+ rte_spinlock_unlock(&rac->m_mutex);
+ const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
+ const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
+ NT_LOG(ERR, NTHW,
+ "%s: DMA begin requested, but a DMA transaction is already active",
+ p_adapter_id_str);
}
return ret;
@@ -716,8 +725,10 @@ int flm_nthw_lrn_data_flush(const struct flm_nthw *p, const uint32_t *data, uint
*handled_records = 0;
int max_tries = 10000;
- while (*inf_avail == 0 && *sta_avail == 0 && records != 0 && --max_tries > 0)
- if (nthw_rac_rab_dma_begin(rac) == 0) {
+ while (*inf_avail == 0 && *sta_avail == 0 && records != 0 && --max_tries > 0) {
+ rte_spinlock_lock(&rac->m_mutex);
+ int ret = !rac->m_dma_active ? nthw_rac_rab_dma_begin(rac) : -1;
+ if (ret == 0) {
uint32_t dma_free = nthw_rac_rab_get_free(rac);
if (dma_free != RAB_DMA_BUF_CNT) {
@@ -770,7 +781,11 @@ int flm_nthw_lrn_data_flush(const struct flm_nthw *p, const uint32_t *data, uint
/* Read buf ctrl */
nthw_rac_rab_read32_dma(rac, bus_id, address_bufctrl, 2, &bc_buf);
- if (nthw_rac_rab_dma_commit(rac) != 0)
+ int ret = rac->m_dma_active ?
+ nthw_rac_rab_dma_commit(rac) :
+ (assert(0), -1);
+ rte_spinlock_unlock(&rac->m_mutex);
+ if (ret != 0)
return -1;
uint32_t bc_mask = bc_buf.size - 1;
@@ -778,8 +793,15 @@ int flm_nthw_lrn_data_flush(const struct flm_nthw *p, const uint32_t *data, uint
*lrn_free = bc_buf.base[bc_index & bc_mask] & 0xffff;
*inf_avail = (bc_buf.base[bc_index & bc_mask] >> 16) & 0xffff;
*sta_avail = bc_buf.base[(bc_index + 1) & bc_mask] & 0xffff;
+ } else {
+ rte_spinlock_unlock(&rac->m_mutex);
+ const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
+ const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
+ NT_LOG(ERR, NTHW,
+ "%s: DMA begin requested, but a DMA transaction is already active",
+ p_adapter_id_str);
}
-
+ }
return 0;
}
@@ -801,7 +823,8 @@ int flm_nthw_inf_sta_data_update(const struct flm_nthw *p, uint32_t *inf_data,
uint32_t mask;
uint32_t index;
- ret = nthw_rac_rab_dma_begin(rac);
+ rte_spinlock_lock(&rac->m_mutex);
+ ret = !rac->m_dma_active ? nthw_rac_rab_dma_begin(rac) : -1;
if (ret == 0) {
/* Announce the number of words to read from INF_DATA */
@@ -821,7 +844,8 @@ int flm_nthw_inf_sta_data_update(const struct flm_nthw *p, uint32_t *inf_data,
}
nthw_rac_rab_read32_dma(rac, bus_id, address_bufctrl, 2, &bc_buf);
- ret = nthw_rac_rab_dma_commit(rac);
+ ret = rac->m_dma_active ? nthw_rac_rab_dma_commit(rac) : (assert(0), -1);
+ rte_spinlock_unlock(&rac->m_mutex);
if (ret != 0)
return ret;
@@ -847,6 +871,13 @@ int flm_nthw_inf_sta_data_update(const struct flm_nthw *p, uint32_t *inf_data,
*lrn_free = bc_buf.base[index & mask] & 0xffff;
*inf_avail = (bc_buf.base[index & mask] >> 16) & 0xffff;
*sta_avail = bc_buf.base[(index + 1) & mask] & 0xffff;
+ } else {
+ rte_spinlock_unlock(&rac->m_mutex);
+ const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
+ const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
+ NT_LOG(ERR, NTHW,
+ "%s: DMA begin requested, but a DMA transaction is already active",
+ p_adapter_id_str);
}
return ret;
diff --git a/drivers/net/ntnic/nthw/nthw_rac.c b/drivers/net/ntnic/nthw/nthw_rac.c
index 461da8e104..ca6aba6db2 100644
--- a/drivers/net/ntnic/nthw/nthw_rac.c
+++ b/drivers/net/ntnic/nthw/nthw_rac.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include "rte_spinlock.h"
#include "nt_util.h"
#include "ntlog.h"
@@ -10,8 +11,6 @@
#include "nthw_register.h"
#include "nthw_rac.h"
-#include <pthread.h>
-
#define RAB_DMA_WAIT (1000000)
#define RAB_READ (0x01)
@@ -217,7 +216,7 @@ int nthw_rac_init(nthw_rac_t *p, nthw_fpga_t *p_fpga, struct fpga_info_s *p_fpga
}
}
- pthread_mutex_init(&p->m_mutex, NULL);
+ rte_spinlock_init(&p->m_mutex);
return 0;
}
@@ -389,19 +388,6 @@ void nthw_rac_bar0_write32(const struct fpga_info_s *p_fpga_info, uint32_t reg_a
int nthw_rac_rab_dma_begin(nthw_rac_t *p)
{
- const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
- const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
-
- pthread_mutex_lock(&p->m_mutex);
-
- if (p->m_dma_active) {
- pthread_mutex_unlock(&p->m_mutex);
- NT_LOG(ERR, NTHW,
- "%s: DMA begin requested, but a DMA transaction is already active",
- p_adapter_id_str);
- return -1;
- }
-
p->m_dma_active = true;
return 0;
@@ -454,19 +440,11 @@ int nthw_rac_rab_dma_commit(nthw_rac_t *p)
{
int ret;
- if (!p->m_dma_active) {
- /* Expecting mutex not to be locked! */
- assert(0); /* alert developer that something is wrong */
- return -1;
- }
-
nthw_rac_rab_dma_activate(p);
ret = nthw_rac_rab_dma_wait(p);
p->m_dma_active = false;
- pthread_mutex_unlock(&p->m_mutex);
-
return ret;
}
@@ -602,7 +580,7 @@ int nthw_rac_rab_write32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint
return -1;
}
- pthread_mutex_lock(&p->m_mutex);
+ rte_spinlock_lock(&p->m_mutex);
if (p->m_dma_active) {
NT_LOG(ERR, NTHW, "%s: RAB: Illegal operation: DMA enabled", p_adapter_id_str);
@@ -748,7 +726,7 @@ int nthw_rac_rab_write32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint
}
exit_unlock_res:
- pthread_mutex_unlock(&p->m_mutex);
+ rte_spinlock_unlock(&p->m_mutex);
return res;
}
@@ -763,7 +741,7 @@ int nthw_rac_rab_read32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint3
uint32_t out_buf_free;
int res = 0;
- pthread_mutex_lock(&p->m_mutex);
+ rte_spinlock_lock(&p->m_mutex);
if (address > (1 << RAB_ADDR_BW)) {
NT_LOG(ERR, NTHW, "%s: RAB: Illegal address: value too large %d - max %d",
@@ -923,7 +901,7 @@ int nthw_rac_rab_read32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint3
}
exit_unlock_res:
- pthread_mutex_unlock(&p->m_mutex);
+ rte_spinlock_unlock(&p->m_mutex);
return res;
}
@@ -935,7 +913,7 @@ int nthw_rac_rab_flush(nthw_rac_t *p)
uint32_t retry;
int res = 0;
- pthread_mutex_lock(&p->m_mutex);
+ rte_spinlock_lock(&p->m_mutex);
/* Set the flush bit */
nthw_rac_reg_write32(p_fpga_info, p->RAC_RAB_BUF_USED_ADDR,
@@ -960,6 +938,6 @@ int nthw_rac_rab_flush(nthw_rac_t *p)
/* Clear flush bit when done */
nthw_rac_reg_write32(p_fpga_info, p->RAC_RAB_BUF_USED_ADDR, 0x0);
- pthread_mutex_unlock(&p->m_mutex);
+ rte_spinlock_unlock(&p->m_mutex);
return res;
}
diff --git a/drivers/net/ntnic/nthw/nthw_rac.h b/drivers/net/ntnic/nthw/nthw_rac.h
index c64dac9da9..df92b487af 100644
--- a/drivers/net/ntnic/nthw/nthw_rac.h
+++ b/drivers/net/ntnic/nthw/nthw_rac.h
@@ -16,7 +16,7 @@ struct nthw_rac {
nthw_fpga_t *mp_fpga;
nthw_module_t *mp_mod_rac;
- pthread_mutex_t m_mutex;
+ rte_spinlock_t m_mutex;
int mn_param_rac_rab_interfaces;
int mn_param_rac_rab_ob_update;
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index df9ee77e06..91669caceb 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -18,6 +18,7 @@
#include <sys/queue.h>
+#include "rte_spinlock.h"
#include "ntlog.h"
#include "ntdrv_4ga.h"
#include "ntos_drv.h"
@@ -236,7 +237,7 @@ static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s
if (!p_nthw_stat || !p_nt4ga_stat || n_intf_no < 0 || n_intf_no > NUM_ADAPTER_PORTS_MAX)
return -1;
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
/* Rx */
for (i = 0; i < internals->nb_rx_queues; i++) {
@@ -256,7 +257,7 @@ static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s
p_nt4ga_stat->n_totals_reset_timestamp = time(NULL);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return 0;
}
@@ -1519,9 +1520,9 @@ static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *sta
return -1;
}
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
nb_xstats = ntnic_xstats_ops->nthw_xstats_get(p_nt4ga_stat, stats, n, if_index);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return nb_xstats;
}
@@ -1544,10 +1545,10 @@ static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
return -1;
}
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
nb_xstats =
ntnic_xstats_ops->nthw_xstats_get_by_id(p_nt4ga_stat, ids, values, n, if_index);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return nb_xstats;
}
@@ -1566,9 +1567,9 @@ static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
return -1;
}
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
ntnic_xstats_ops->nthw_xstats_reset(p_nt4ga_stat, if_index);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return dpdk_stats_reset(internals, p_nt_drv, if_index);
}
@@ -1749,14 +1750,14 @@ THREAD_FUNC port_event_thread_fn(void *context)
if (p_nt4ga_stat->flm_stat_ver > 22 && p_nt4ga_stat->mp_stat_structs_flm) {
if (flmdata.lookup != p_nt4ga_stat->mp_stat_structs_flm->load_lps ||
flmdata.access != p_nt4ga_stat->mp_stat_structs_flm->load_aps) {
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
flmdata.lookup = p_nt4ga_stat->mp_stat_structs_flm->load_lps;
flmdata.access = p_nt4ga_stat->mp_stat_structs_flm->load_aps;
flmdata.lookup_maximum =
p_nt4ga_stat->mp_stat_structs_flm->max_lps;
flmdata.access_maximum =
p_nt4ga_stat->mp_stat_structs_flm->max_aps;
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
rte_eth_dev_callback_process(eth_dev,
@@ -1773,7 +1774,7 @@ THREAD_FUNC port_event_thread_fn(void *context)
if (p_nt4ga_stat->mp_port_load) {
if (portdata.rx_bps != p_nt4ga_stat->mp_port_load[port_no].rx_bps ||
portdata.tx_bps != p_nt4ga_stat->mp_port_load[port_no].tx_bps) {
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
portdata.rx_bps = p_nt4ga_stat->mp_port_load[port_no].rx_bps;
portdata.tx_bps = p_nt4ga_stat->mp_port_load[port_no].tx_bps;
portdata.rx_pps = p_nt4ga_stat->mp_port_load[port_no].rx_pps;
@@ -1786,7 +1787,7 @@ THREAD_FUNC port_event_thread_fn(void *context)
p_nt4ga_stat->mp_port_load[port_no].rx_bps_max;
portdata.tx_bps_maximum =
p_nt4ga_stat->mp_port_load[port_no].tx_bps_max;
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
rte_eth_dev_callback_process(eth_dev,
@@ -1957,9 +1958,9 @@ THREAD_FUNC adapter_stat_thread_fn(void *context)
/* Check then collect */
{
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
nt4ga_stat_ops->nt4ga_stat_collect(&p_nt_drv->adapter_info, p_nt4ga_stat);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
}
}
@@ -2232,7 +2233,7 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
}
}
- pthread_mutex_init(&p_nt_drv->stat_lck, NULL);
+ rte_spinlock_init(&p_nt_drv->stat_lck);
res = THREAD_CTRL_CREATE(&p_nt_drv->stat_thread, "nt4ga_stat_thr", adapter_stat_thread_fn,
(void *)p_drv);
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 8edaccb65c..4c18088681 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -910,7 +910,7 @@ static int poll_statistics(struct pmd_internals *internals)
internals->last_stat_rtc = now_rtc;
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
/*
* Add the RX statistics increments since last time we polled.
@@ -951,7 +951,7 @@ static int poll_statistics(struct pmd_internals *internals)
/* Globally only once a second */
if ((now_rtc - last_stat_rtc) < rte_tsc_freq) {
rte_spinlock_unlock(&hwlock);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return 0;
}
@@ -988,7 +988,7 @@ static int poll_statistics(struct pmd_internals *internals)
}
rte_spinlock_unlock(&hwlock);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return 0;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 79/86] net/ntnic: remove unnecessary type cast
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (77 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 78/86] net/ntnic: migrate to the RTE spinlock Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 80/86] net/ntnic: add async create/destroy API declaration Serhii Iliushyk
` (7 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The dev_private field has type void *, so an explicit type cast is unnecessary in C.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 18 +++----
drivers/net/ntnic/ntnic_ethdev.c | 48 +++++++++----------
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 16 +++----
3 files changed, 41 insertions(+), 41 deletions(-)
diff --git a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
index e4e8fe0c7d..33593927a4 100644
--- a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
+++ b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
@@ -42,7 +42,7 @@ static int eth_mtr_capabilities_get_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -110,7 +110,7 @@ static int eth_mtr_meter_profile_add_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
@@ -161,7 +161,7 @@ static int eth_mtr_meter_profile_delete_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
@@ -184,7 +184,7 @@ static int eth_mtr_meter_policy_add_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
@@ -250,7 +250,7 @@ static int eth_mtr_create_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -316,7 +316,7 @@ static int eth_mtr_destroy_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -348,7 +348,7 @@ static int eth_mtr_stats_adjust_inline(struct rte_eth_dev *eth_dev,
const uint64_t adjust_bit = 1ULL << 63;
const uint64_t probe_bit = 1ULL << 62;
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
if (mtr_id >=
@@ -409,7 +409,7 @@ static int eth_mtr_stats_read_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -445,7 +445,7 @@ static const struct rte_mtr_ops mtr_ops_inline = {
static int eth_mtr_ops_get(struct rte_eth_dev *eth_dev, void *ops)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
ntdrv_4ga_t *p_nt_drv = &internals->p_drv->ntdrv;
enum fpga_info_profile profile = p_nt_drv->adapter_info.fpga_info.profile;
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 91669caceb..068c3d932a 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -272,7 +272,7 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
@@ -302,14 +302,14 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
static int eth_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
dpdk_stats_collect(internals, stats);
return 0;
}
static int eth_stats_reset(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
const int if_index = internals->n_intf_no;
@@ -327,7 +327,7 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
@@ -957,14 +957,14 @@ static int deallocate_hw_virtio_queues(struct hwq_s *hwq)
static void eth_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t queue_id)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct ntnic_tx_queue *tx_q = &internals->txq_scg[queue_id];
deallocate_hw_virtio_queues(&tx_q->hwq);
}
static void eth_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t queue_id)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct ntnic_rx_queue *rx_q = &internals->rxq_scg[queue_id];
deallocate_hw_virtio_queues(&rx_q->hwq);
}
@@ -994,7 +994,7 @@ static int eth_rx_scg_queue_setup(struct rte_eth_dev *eth_dev,
{
NT_LOG_DBGX(DBG, NTNIC, "Rx queue setup");
struct rte_pktmbuf_pool_private *mbp_priv;
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct ntnic_rx_queue *rx_q = &internals->rxq_scg[rx_queue_id];
struct drv_s *p_drv = internals->p_drv;
struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
@@ -1062,7 +1062,7 @@ static int eth_tx_scg_queue_setup(struct rte_eth_dev *eth_dev,
}
NT_LOG_DBGX(DBG, NTNIC, "Tx queue setup");
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
struct ntnic_tx_queue *tx_q = &internals->txq_scg[tx_queue_id];
@@ -1185,7 +1185,7 @@ eth_mac_addr_add(struct rte_eth_dev *eth_dev,
if (index >= NUM_MAC_ADDRS_PER_PORT) {
const struct pmd_internals *const internals =
- (struct pmd_internals *)eth_dev->data->dev_private;
+ eth_dev->data->dev_private;
NT_LOG_DBGX(DBG, NTNIC, "Port %i: illegal index %u (>= %u)",
internals->n_intf_no, index, NUM_MAC_ADDRS_PER_PORT);
return -1;
@@ -1211,7 +1211,7 @@ eth_set_mc_addr_list(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *mc_addr_set,
uint32_t nb_mc_addr)
{
- struct pmd_internals *const internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *const internals = eth_dev->data->dev_private;
struct rte_ether_addr *const mc_addrs = internals->mc_addrs;
size_t i;
@@ -1252,7 +1252,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
@@ -1313,7 +1313,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
static int
eth_dev_stop(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
NT_LOG_DBGX(DBG, NTNIC, "Port %u", internals->n_intf_no);
@@ -1341,7 +1341,7 @@ eth_dev_set_link_up(struct rte_eth_dev *eth_dev)
return -1;
}
- struct pmd_internals *const internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *const internals = eth_dev->data->dev_private;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
const int port = internals->n_intf_no;
@@ -1367,7 +1367,7 @@ eth_dev_set_link_down(struct rte_eth_dev *eth_dev)
return -1;
}
- struct pmd_internals *const internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *const internals = eth_dev->data->dev_private;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
const int port = internals->n_intf_no;
@@ -1440,7 +1440,7 @@ drv_deinit(struct drv_s *p_drv)
static int
eth_dev_close(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
if (internals->type != PORT_TYPE_VIRTUAL) {
@@ -1478,7 +1478,7 @@ eth_dev_close(struct rte_eth_dev *eth_dev)
static int
eth_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, size_t fw_size)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (internals->type == PORT_TYPE_VIRTUAL || internals->type == PORT_TYPE_OVERRIDE)
return 0;
@@ -1506,7 +1506,7 @@ static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct r
static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *stats, unsigned int n)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1531,7 +1531,7 @@ static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
uint64_t *values,
unsigned int n)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1554,7 +1554,7 @@ static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1576,7 +1576,7 @@ static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
static int eth_xstats_get_names(struct rte_eth_dev *eth_dev,
struct rte_eth_xstat_name *xstats_names, unsigned int size)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1596,7 +1596,7 @@ static int eth_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
struct rte_eth_xstat_name *xstats_names,
unsigned int size)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1627,7 +1627,7 @@ static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_r
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct flow_nic_dev *ndev = internals->flw_dev->ndev;
struct nt_eth_rss_conf tmp_rss_conf = { 0 };
@@ -1662,7 +1662,7 @@ static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_r
static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct flow_nic_dev *ndev = internals->flw_dev->ndev;
rss_conf->algorithm = (enum rte_eth_hash_function)ndev->rss_conf.algorithm;
@@ -1723,7 +1723,7 @@ static struct eth_dev_ops nthw_eth_dev_ops = {
*/
THREAD_FUNC port_event_thread_fn(void *context)
{
- struct pmd_internals *internals = (struct pmd_internals *)context;
+ struct pmd_internals *internals = context;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 4c18088681..0e20606a41 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -491,7 +491,7 @@ static int convert_flow(struct rte_eth_dev *eth_dev,
struct cnv_action_s *action,
struct rte_flow_error *error)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
static struct rte_flow_error flow_error = {
@@ -554,7 +554,7 @@ eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
@@ -595,7 +595,7 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return NULL;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
@@ -673,7 +673,7 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
@@ -716,7 +716,7 @@ static int eth_flow_actions_update(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
.message = "none" };
@@ -724,7 +724,7 @@ static int eth_flow_actions_update(struct rte_eth_dev *eth_dev,
if (internals->flw_dev) {
struct pmd_internals *dev_private =
- (struct pmd_internals *)eth_dev->data->dev_private;
+ eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &dev_private->p_drv->ntdrv.adapter_info.fpga_info;
struct cnv_action_s action = { 0 };
@@ -780,7 +780,7 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
@@ -808,7 +808,7 @@ static int eth_flow_get_aged_flows(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 80/86] net/ntnic: add async create/destroy API declaration
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (78 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 79/86] net/ntnic: remove unnecessary type cast Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 81/86] net/ntnic: add async template " Serhii Iliushyk
` (6 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Fast path asynchronous flow create and destroy API implementations were added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 8 ++
drivers/net/ntnic/ntnic_ethdev.c | 1 +
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 105 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 15 +++
drivers/net/ntnic/ntnic_mod_reg.h | 18 +++
5 files changed, 147 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b40a27fbf1..505fb8e501 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -343,6 +343,14 @@ struct flow_handle {
};
};
+struct flow_pattern_template {
+};
+
+struct flow_actions_template {
+};
+struct flow_template_table {
+};
+
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
void km_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 068c3d932a..77436eb02d 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1252,6 +1252,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
return -1;
}
+ eth_dev->flow_fp_ops = get_dev_fp_flow_ops();
struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 0e20606a41..d1f3ed4831 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -4,6 +4,11 @@
*/
#include <rte_flow_driver.h>
+#include <rte_pci.h>
+#include <rte_version.h>
+#include <rte_flow.h>
+
+#include "ntlog.h"
#include "nt_util.h"
#include "create_elements.h"
#include "ntnic_mod_reg.h"
@@ -881,6 +886,96 @@ static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_por
return res;
}
+static struct rte_flow *eth_flow_async_create(struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t actions_template_index, void *user_data, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct cnv_action_s action = { 0 };
+ struct cnv_match_s match = { 0 };
+
+ if (create_match_elements(&match, pattern, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in pattern");
+ return NULL;
+ }
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ uint32_t queue_offset = 0;
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0)
+ queue_offset = internals->vpq[0].id;
+
+ if (create_action_elements_inline(&action, actions, MAX_ACTIONS, queue_offset) !=
+ 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return NULL;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return NULL;
+ }
+
+ struct flow_handle *res =
+ flow_filter_ops->flow_async_create(internals->flw_dev,
+ queue_id,
+ (const struct rte_flow_op_attr *)op_attr,
+ (struct flow_template_table *)template_table,
+ match.rte_flow_item,
+ pattern_template_index,
+ action.flow_actions,
+ actions_template_index,
+ user_data,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return (struct rte_flow *)res;
+}
+
+static int eth_flow_async_destroy(struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct rte_flow *flow,
+ void *user_data, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_async_destroy(internals->flw_dev,
+ queue_id,
+ (const struct rte_flow_op_attr *)op_attr,
+ (struct flow_handle *)flow,
+ user_data,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
static int poll_statistics(struct pmd_internals *internals)
{
int flow;
@@ -1017,3 +1112,13 @@ void dev_flow_init(void)
{
register_dev_flow_ops(&dev_flow_ops);
}
+
+static struct rte_flow_fp_ops async_dev_flow_ops = {
+ .async_create = eth_flow_async_create,
+ .async_destroy = eth_flow_async_destroy,
+};
+
+void dev_fp_flow_init(void)
+{
+ register_dev_fp_flow_ops(&async_dev_flow_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 10aa778a57..658fac72c0 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -199,6 +199,21 @@ const struct flow_filter_ops *get_flow_filter_ops(void)
return flow_filter_ops;
}
+static const struct rte_flow_fp_ops *dev_fp_flow_ops;
+
+void register_dev_fp_flow_ops(const struct rte_flow_fp_ops *ops)
+{
+ dev_fp_flow_ops = ops;
+}
+
+const struct rte_flow_fp_ops *get_dev_fp_flow_ops(void)
+{
+ if (dev_fp_flow_ops == NULL)
+ dev_fp_flow_init();
+
+ return dev_fp_flow_ops;
+}
+
static const struct rte_flow_ops *dev_flow_ops;
void register_dev_flow_ops(const struct rte_flow_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 563e62ebce..572da11d02 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -7,6 +7,7 @@
#define __NTNIC_MOD_REG_H__
#include <stdint.h>
+#include <rte_flow.h>
#include "rte_ethdev.h"
#include "rte_flow_driver.h"
@@ -426,6 +427,19 @@ struct flow_filter_ops {
uint32_t nb_contexts,
struct rte_flow_error *error);
+ /*
+ * RTE flow asynchronous operations functions
+ */
+ struct flow_handle *(*flow_async_create)(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t actions_template_index, void *user_data, struct rte_flow_error *error);
+
+ int (*flow_async_destroy)(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error);
+
int (*flow_info_get)(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
struct rte_flow_error *error);
@@ -436,6 +450,10 @@ struct flow_filter_ops {
struct rte_flow_error *error);
};
+void register_dev_fp_flow_ops(const struct rte_flow_fp_ops *ops);
+const struct rte_flow_fp_ops *get_dev_fp_flow_ops(void);
+void dev_fp_flow_init(void);
+
void register_dev_flow_ops(const struct rte_flow_ops *ops);
const struct rte_flow_ops *get_dev_flow_ops(void);
void dev_flow_init(void);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 81/86] net/ntnic: add async template API declaration
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (79 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 80/86] net/ntnic: add async create/destroy API declaration Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 82/86] net/ntnic: add async flow create/delete API implementation Serhii Iliushyk
` (5 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
rte_flow_ops was extended with support for the following features:
1. flow pattern template create
2. flow pattern template destroy
3. flow actions template create
4. flow actions template destroy
5. flow template table create
6. flow template table destroy
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 224 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 28 +++
2 files changed, 252 insertions(+)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index d1f3ed4831..06b6ae442b 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -886,6 +886,224 @@ static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_por
return res;
}
+static struct rte_flow_pattern_template *eth_flow_pattern_template_create(struct rte_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct cnv_match_s match = { 0 };
+ struct rte_flow_pattern_template_attr attr = {
+ .relaxed_matching = template_attr->relaxed_matching,
+ .ingress = template_attr->ingress,
+ .egress = template_attr->egress,
+ .transfer = template_attr->transfer,
+ };
+
+ uint16_t caller_id = get_caller_id(dev->data->port_id);
+
+ if (create_match_elements(&match, pattern, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in pattern");
+ return NULL;
+ }
+
+ struct flow_pattern_template *res =
+ flow_filter_ops->flow_pattern_template_create(internals->flw_dev, &attr, caller_id,
+ match.rte_flow_item, &flow_error);
+
+ convert_error(error, &flow_error);
+ return (struct rte_flow_pattern_template *)res;
+}
+
+static int eth_flow_pattern_template_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_pattern_template *pattern_template,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_pattern_template_destroy(internals->flw_dev,
+ (struct flow_pattern_template *)
+ pattern_template,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
+static struct rte_flow_actions_template *eth_flow_actions_template_create(struct rte_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct cnv_action_s action = { 0 };
+ struct cnv_action_s mask = { 0 };
+ struct rte_flow_actions_template_attr attr = {
+ .ingress = template_attr->ingress,
+ .egress = template_attr->egress,
+ .transfer = template_attr->transfer,
+ };
+ uint16_t caller_id = get_caller_id(dev->data->port_id);
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ uint32_t queue_offset = 0;
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0)
+ queue_offset = internals->vpq[0].id;
+
+ if (create_action_elements_inline(&action, actions, MAX_ACTIONS, queue_offset) !=
+ 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return NULL;
+ }
+
+ if (create_action_elements_inline(&mask, masks, MAX_ACTIONS, queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in masks");
+ return NULL;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return NULL;
+ }
+
+ struct flow_actions_template *res =
+ flow_filter_ops->flow_actions_template_create(internals->flw_dev, &attr, caller_id,
+ action.flow_actions,
+ mask.flow_actions, &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return (struct rte_flow_actions_template *)res;
+}
+
+static int eth_flow_actions_template_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_actions_template *actions_template,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_actions_template_destroy(internals->flw_dev,
+ (struct flow_actions_template *)
+ actions_template,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
+static struct rte_flow_template_table *eth_flow_template_table_create(struct rte_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr,
+ struct rte_flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct rte_flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct rte_flow_template_table_attr attr = {
+ .flow_attr = {
+ .group = table_attr->flow_attr.group,
+ .priority = table_attr->flow_attr.priority,
+ .ingress = table_attr->flow_attr.ingress,
+ .egress = table_attr->flow_attr.egress,
+ .transfer = table_attr->flow_attr.transfer,
+ },
+ .nb_flows = table_attr->nb_flows,
+ };
+ uint16_t forced_vlan_vid = 0;
+ uint16_t caller_id = get_caller_id(dev->data->port_id);
+
+ struct flow_template_table *res =
+ flow_filter_ops->flow_template_table_create(internals->flw_dev, &attr,
+ forced_vlan_vid, caller_id,
+ (struct flow_pattern_template **)pattern_templates,
+ nb_pattern_templates, (struct flow_actions_template **)actions_templates,
+ nb_actions_templates, &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return (struct rte_flow_template_table *)res;
+}
+
+static int eth_flow_template_table_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_template_table *template_table,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_template_table_destroy(internals->flw_dev,
+ (struct flow_template_table *)
+ template_table,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
static struct rte_flow *eth_flow_async_create(struct rte_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
struct rte_flow_template_table *template_table, const struct rte_flow_item pattern[],
@@ -1106,6 +1324,12 @@ static const struct rte_flow_ops dev_flow_ops = {
.get_aged_flows = eth_flow_get_aged_flows,
.info_get = eth_flow_info_get,
.configure = eth_flow_configure,
+ .pattern_template_create = eth_flow_pattern_template_create,
+ .pattern_template_destroy = eth_flow_pattern_template_destroy,
+ .actions_template_create = eth_flow_actions_template_create,
+ .actions_template_destroy = eth_flow_actions_template_destroy,
+ .template_table_create = eth_flow_template_table_create,
+ .template_table_destroy = eth_flow_template_table_destroy,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 572da11d02..92856b81d5 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -430,6 +430,34 @@ struct flow_filter_ops {
/*
* RTE flow asynchronous operations functions
*/
+ struct flow_pattern_template *(*flow_pattern_template_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error);
+
+ int (*flow_pattern_template_destroy)(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error);
+
+ struct flow_actions_template *(*flow_actions_template_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error);
+
+ int (*flow_actions_template_destroy)(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error);
+
+ struct flow_template_table *(*flow_template_table_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error);
+
+ int (*flow_template_table_destroy)(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error);
+
struct flow_handle *(*flow_async_create)(struct flow_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
struct flow_template_table *template_table, const struct rte_flow_item pattern[],
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 82/86] net/ntnic: add async flow create/delete API implementation
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (80 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 81/86] net/ntnic: add async template " Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 83/86] net/ntnic: add async template APIs implementation Serhii Iliushyk
` (4 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The inline profile was extended with an implementation of the async flow
create and delete features.
The async create and destroy APIs were added to the flow filter ops.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 36 +++
drivers/net/ntnic/nthw/flow_api/flow_api.c | 39 +++
.../profile_inline/flow_api_hw_db_inline.c | 13 +
.../profile_inline/flow_api_hw_db_inline.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 248 +++++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 14 +
drivers/net/ntnic/ntnic_mod_reg.h | 15 ++
7 files changed, 366 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 505fb8e501..6935ff483a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -339,6 +339,12 @@ struct flow_handle {
uint8_t flm_rqi;
uint8_t flm_qfi;
uint8_t flm_scrub_prof;
+
+ /* Flow specific pointer to application template table cell stored during
+ * flow create.
+ */
+ struct flow_template_table_cell *template_table_cell;
+ bool flm_async;
};
};
};
@@ -347,8 +353,38 @@ struct flow_pattern_template {
};
struct flow_actions_template {
+ struct nic_flow_def *fd;
+
+ uint32_t num_dest_port;
+ uint32_t num_queues;
};
+
+struct flow_template_table_cell {
+ atomic_int status;
+ atomic_int counter;
+
+ uint32_t flm_db_idx_counter;
+ uint32_t flm_db_idxs[RES_COUNT];
+
+ uint32_t flm_key_id;
+ uint32_t flm_ft;
+
+ uint16_t flm_rpl_ext_ptr;
+ uint8_t flm_scrub_prof;
+};
+
struct flow_template_table {
+ struct flow_pattern_template **pattern_templates;
+ uint8_t nb_pattern_templates;
+
+ struct flow_actions_template **actions_templates;
+ uint8_t nb_actions_templates;
+
+ struct flow_template_table_cell *pattern_action_pairs;
+
+ struct rte_flow_attr attr;
+ uint16_t forced_vlan_vid;
+ uint16_t caller_id;
};
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 9689aece58..420f081178 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1115,6 +1115,43 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
nb_queue, queue_attr, error);
}
+/*
+ * Flow Asynchronous operation API
+ */
+
+static struct flow_handle *
+flow_async_create(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_template_table *template_table,
+ const struct rte_flow_item pattern[], uint8_t pattern_template_index,
+ const struct rte_flow_action actions[], uint8_t actions_template_index, void *user_data,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_async_create_profile_inline(dev, queue_id, op_attr,
+ template_table, pattern, pattern_template_index, actions,
+ actions_template_index, user_data, error);
+}
+
+static int flow_async_destroy(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_async_destroy_profile_inline(dev, queue_id, op_attr, flow,
+ user_data, error);
+}
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
@@ -1151,6 +1188,8 @@ static const struct flow_filter_ops ops = {
*/
.flow_info_get = flow_info_get,
.flow_configure = flow_configure,
+ .flow_async_create = flow_async_create,
+ .flow_async_destroy = flow_async_destroy,
/*
* Other
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 2fee6ae6b5..ffab643f56 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -393,6 +393,19 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+struct hw_db_idx *hw_db_inline_find_idx(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
+{
+ (void)ndev;
+ (void)db_handle;
+ for (uint32_t i = 0; i < size; ++i) {
+ if (idxs[i].type == type)
+ return &idxs[i];
+ }
+
+ return NULL;
+}
+
void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
uint32_t size, FILE *file)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index c920d36cfd..aa046b68a7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -287,6 +287,8 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
uint32_t size);
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
+struct hw_db_idx *hw_db_inline_find_idx(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
uint32_t size, FILE *file);
void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9c554ee7e2..d97206614b 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3,7 +3,6 @@
* Copyright(c) 2023 Napatech A/S
*/
-#include "generic/rte_spinlock.h"
#include "ntlog.h"
#include "nt_util.h"
@@ -64,6 +63,11 @@
#define POLICING_PARAMETER_OFFSET 4096
#define SIZE_CONVERTER 1099.511627776
+#define CELL_STATUS_UNINITIALIZED 0
+#define CELL_STATUS_INITIALIZING 1
+#define CELL_STATUS_INITIALIZED_TYPE_FLOW 2
+#define CELL_STATUS_INITIALIZED_TYPE_FLM 3
+
struct flm_mtr_stat_s {
struct dual_buckets_s *buckets;
atomic_uint_fast64_t n_pkt;
@@ -1034,6 +1038,17 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
return 0;
}
+static inline const void *memcpy_or(void *dest, const void *src, size_t count)
+{
+ unsigned char *dest_ptr = (unsigned char *)dest;
+ const unsigned char *src_ptr = (const unsigned char *)src;
+
+ for (size_t i = 0; i < count; ++i)
+ dest_ptr[i] |= src_ptr[i];
+
+ return dest;
+}
+
/*
* This function must be callable without locking any mutexes
*/
@@ -4345,6 +4360,9 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
{
int err = 0;
+ if (flow && flow->type == FLOW_HANDLE_TYPE_FLM && flow->flm_async)
+ return flow_async_destroy_profile_inline(dev, 0, NULL, flow, NULL, error);
+
flow_nic_set_error(ERR_SUCCESS, error);
if (flow) {
@@ -5489,6 +5507,232 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
+struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev,
+ uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table,
+ const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index,
+ const struct rte_flow_action actions[],
+ uint8_t actions_template_index,
+ void *user_data,
+ struct rte_flow_error *error)
+{
+ (void)queue_id;
+ (void)op_attr;
+ struct flow_handle *fh = NULL;
+ int res, status;
+
+ const uint32_t pattern_action_index =
+ (uint32_t)template_table->nb_actions_templates * pattern_template_index +
+ actions_template_index;
+ struct flow_template_table_cell *pattern_action_pair =
+ &template_table->pattern_action_pairs[pattern_action_index];
+
+ uint32_t num_dest_port =
+ template_table->actions_templates[actions_template_index]->num_dest_port;
+ uint32_t num_queues =
+ template_table->actions_templates[actions_template_index]->num_queues;
+
+ uint32_t port_id = UINT32_MAX;
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct nic_flow_def *fd = malloc(sizeof(struct nic_flow_def));
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate flow_def";
+ goto err_exit;
+ }
+
+ memcpy(fd, template_table->actions_templates[actions_template_index]->fd,
+ sizeof(struct nic_flow_def));
+
+ res = interpret_flow_elements(dev, pattern, fd, error,
+ template_table->forced_vlan_vid, &port_id, packet_data,
+ packet_mask, &key_def);
+
+ if (res)
+ goto err_exit;
+
+ if (port_id == UINT32_MAX)
+ port_id = dev->port_id;
+
+ {
+ uint32_t num_dest_port_tmp = 0;
+ uint32_t num_queues_tmp = 0;
+
+ struct nic_flow_def action_fd = { 0 };
+ prepare_nic_flow_def(&action_fd);
+
+ res = interpret_flow_actions(dev, actions, NULL, &action_fd, error,
+ &num_dest_port_tmp, &num_queues_tmp);
+
+ if (res)
+ goto err_exit;
+
+ /* Copy FLM unique actions: modify_field, meter, encap/decap and age */
+ memcpy_or(fd->mtr_ids, action_fd.mtr_ids, sizeof(action_fd.mtr_ids));
+ memcpy_or(&fd->tun_hdr, &action_fd.tun_hdr, sizeof(struct tunnel_header_s));
+ memcpy_or(fd->modify_field, action_fd.modify_field,
+ sizeof(action_fd.modify_field));
+ fd->modify_field_count = action_fd.modify_field_count;
+ memcpy_or(&fd->age, &action_fd.age, sizeof(struct rte_flow_action_age));
+ }
+
+ status = atomic_load(&pattern_action_pair->status);
+
+ /* Initializing template entry */
+ if (status < CELL_STATUS_INITIALIZED_TYPE_FLOW) {
+ if (status == CELL_STATUS_UNINITIALIZED &&
+ atomic_compare_exchange_strong(&pattern_action_pair->status, &status,
+ CELL_STATUS_INITIALIZING)) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+
+ fh = create_flow_filter(dev, fd, &template_table->attr,
+ template_table->forced_vlan_vid, template_table->caller_id,
+ error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ if (fh == NULL) {
+ /* reset status to CELL_STATUS_UNINITIALIZED to avoid a deadlock */
+ atomic_store(&pattern_action_pair->status,
+ CELL_STATUS_UNINITIALIZED);
+ goto err_exit;
+ }
+
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+
+ struct hw_db_idx *flm_ft_idx =
+ hw_db_inline_find_idx(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_FLM_FT,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ pattern_action_pair->flm_db_idx_counter = fh->flm_db_idx_counter;
+ memcpy(pattern_action_pair->flm_db_idxs, fh->flm_db_idxs,
+ sizeof(struct hw_db_idx) * fh->flm_db_idx_counter);
+
+ pattern_action_pair->flm_key_id = fh->flm_kid;
+ pattern_action_pair->flm_ft = flm_ft_idx->id1;
+
+ pattern_action_pair->flm_rpl_ext_ptr = fh->flm_rpl_ext_ptr;
+ pattern_action_pair->flm_scrub_prof = fh->flm_scrub_prof;
+
+ atomic_store(&pattern_action_pair->status,
+ CELL_STATUS_INITIALIZED_TYPE_FLM);
+
+ /* increment template table cell reference */
+ atomic_fetch_add(&pattern_action_pair->counter, 1);
+ fh->template_table_cell = pattern_action_pair;
+ fh->flm_async = true;
+
+ } else {
+ atomic_store(&pattern_action_pair->status,
+ CELL_STATUS_INITIALIZED_TYPE_FLOW);
+ }
+
+ } else {
+ do {
+ nt_os_wait_usec(1);
+ status = atomic_load(&pattern_action_pair->status);
+ } while (status == CELL_STATUS_INITIALIZING);
+
+ /* error handling in case that create_flow_filter() will fail in the other
+ * thread
+ */
+ if (status == CELL_STATUS_UNINITIALIZED)
+ goto err_exit;
+ }
+ }
+
+ /* FLM learn */
+ if (fh == NULL && status == CELL_STATUS_INITIALIZED_TYPE_FLM) {
+ fh = calloc(1, sizeof(struct flow_handle));
+
+ fh->type = FLOW_HANDLE_TYPE_FLM;
+ fh->dev = dev;
+ fh->caller_id = template_table->caller_id;
+ fh->user_data = user_data;
+
+ copy_fd_to_fh_flm(fh, fd, packet_data, pattern_action_pair->flm_key_id,
+ pattern_action_pair->flm_ft,
+ pattern_action_pair->flm_rpl_ext_ptr,
+ pattern_action_pair->flm_scrub_prof,
+ template_table->attr.priority & 0x3);
+
+ free(fd);
+
+ flm_flow_programming(fh, NT_FLM_OP_LEARN);
+
+ nic_insert_flow_flm(dev->ndev, fh);
+
+ /* increment template table cell reference */
+ atomic_fetch_add(&pattern_action_pair->counter, 1);
+ fh->template_table_cell = pattern_action_pair;
+ fh->flm_async = true;
+
+ } else if (fh == NULL) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+
+ fh = create_flow_filter(dev, fd, &template_table->attr,
+ template_table->forced_vlan_vid, template_table->caller_id,
+ error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ if (fh == NULL)
+ goto err_exit;
+ }
+
+ if (fh) {
+ fh->caller_id = template_table->caller_id;
+ fh->user_data = user_data;
+ }
+
+ return fh;
+
+err_exit:
+ free(fd);
+ free(fh);
+
+ return NULL;
+}
+
+int flow_async_destroy_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error)
+{
+ (void)queue_id;
+ (void)op_attr;
+ (void)user_data;
+
+ if (flow->type == FLOW_HANDLE_TYPE_FLOW)
+ return flow_destroy_profile_inline(dev, flow, error);
+
+ if (flm_flow_programming(flow, NT_FLM_OP_UNLEARN)) {
+ NT_LOG(ERR, FILTER, "FAILED to destroy flow: %p", flow);
+ flow_nic_set_error(ERR_REMOVE_FLOW_FAILED, error);
+ return -1;
+ }
+
+ nic_remove_flow_flm(dev->ndev, flow);
+
+ free(flow);
+
+ return 0;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -5513,6 +5757,8 @@ static const struct profile_inline_ops ops = {
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
.flow_info_get_profile_inline = flow_info_get_profile_inline,
.flow_configure_profile_inline = flow_configure_profile_inline,
+ .flow_async_create_profile_inline = flow_async_create_profile_inline,
+ .flow_async_destroy_profile_inline = flow_async_destroy_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 8a03be1ab7..b548142342 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -69,6 +69,20 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+/*
+ * RTE flow asynchronous operations functions
+ */
+
+struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t actions_template_index, void *user_data, struct rte_flow_error *error);
+
+int flow_async_destroy_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error);
+
int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info,
struct rte_flow_queue_info *queue_info, struct rte_flow_error *error);
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 92856b81d5..e8e7090661 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -310,6 +310,21 @@ struct profile_inline_ops {
uint32_t nb_contexts,
struct rte_flow_error *error);
+ /*
+ * RTE flow asynchronous operations functions
+ */
+
+ struct flow_handle *(*flow_async_create_profile_inline)(struct flow_eth_dev *dev,
+ uint32_t queue_id, const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t rte_pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t rte_actions_template_index, void *user_data, struct rte_flow_error *error);
+
+ int (*flow_async_destroy_profile_inline)(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_handle *flow, void *user_data,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 83/86] net/ntnic: add async template APIs implementation
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (81 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 82/86] net/ntnic: add async flow create/delete API implementation Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 84/86] net/ntnic: update async flow API documentation Serhii Iliushyk
` (3 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The flow filter ops and the inline API were extended with the following APIs:
1. flow pattern template create
2. flow pattern template destroy
3. flow actions template create
4. flow actions template destroy
5. flow template table create
6. flow template table destroy
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 104 ++++++++
.../profile_inline/flow_api_profile_inline.c | 225 ++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 28 +++
drivers/net/ntnic/ntnic_mod_reg.h | 30 +++
5 files changed, 388 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 6935ff483a..8604dde995 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -350,6 +350,7 @@ struct flow_handle {
};
struct flow_pattern_template {
+ struct nic_flow_def *fd;
};
struct flow_actions_template {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 420f081178..111129a9ac 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1119,6 +1119,104 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
* Flow Asynchronous operation API
*/
+static struct flow_pattern_template *
+flow_pattern_template_create(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_pattern_template_create_profile_inline(dev, template_attr,
+ caller_id, pattern, error);
+}
+
+static int flow_pattern_template_destroy(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_pattern_template_destroy_profile_inline(dev,
+ pattern_template,
+ error);
+}
+
+static struct flow_actions_template *
+flow_actions_template_create(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_actions_template_create_profile_inline(dev, template_attr,
+ caller_id, actions, masks, error);
+}
+
+static int flow_actions_template_destroy(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_actions_template_destroy_profile_inline(dev,
+ actions_template,
+ error);
+}
+
+static struct flow_template_table *flow_template_table_create(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id, struct flow_pattern_template *pattern_templates[],
+ uint8_t nb_pattern_templates, struct flow_actions_template *actions_templates[],
+ uint8_t nb_actions_templates, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_template_table_create_profile_inline(dev, table_attr,
+ forced_vlan_vid, caller_id, pattern_templates, nb_pattern_templates,
+ actions_templates, nb_actions_templates, error);
+}
+
+static int flow_template_table_destroy(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_template_table_destroy_profile_inline(dev, template_table,
+ error);
+}
+
static struct flow_handle *
flow_async_create(struct flow_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr, struct flow_template_table *template_table,
@@ -1188,6 +1286,12 @@ static const struct flow_filter_ops ops = {
*/
.flow_info_get = flow_info_get,
.flow_configure = flow_configure,
+ .flow_pattern_template_create = flow_pattern_template_create,
+ .flow_pattern_template_destroy = flow_pattern_template_destroy,
+ .flow_actions_template_create = flow_actions_template_create,
+ .flow_actions_template_destroy = flow_actions_template_destroy,
+ .flow_template_table_create = flow_template_table_create,
+ .flow_template_table_destroy = flow_template_table_destroy,
.flow_async_create = flow_async_create,
.flow_async_destroy = flow_async_destroy,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index d97206614b..89e7041350 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -5507,6 +5507,223 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
+struct flow_pattern_template *flow_pattern_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error)
+{
+ (void)template_attr;
+ (void)caller_id;
+ uint32_t port_id = 0;
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate flow_def";
+ return NULL;
+ }
+
+ /* Note that forced_vlan_vid is unavailable at this point in time */
+ int res = interpret_flow_elements(dev, pattern, fd, error, 0, &port_id, packet_data,
+ packet_mask, &key_def);
+
+ if (res) {
+ free(fd);
+ return NULL;
+ }
+
+ struct flow_pattern_template *template = calloc(1, sizeof(struct flow_pattern_template));
+
+ if (template == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate pattern_template";
+ free(fd);
+ return NULL;
+ }
+
+ template->fd = fd;
+
+ return template;
+}
+
+int flow_pattern_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ free(pattern_template->fd);
+ free(pattern_template);
+
+ return 0;
+}
+
+struct flow_actions_template *
+flow_actions_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[],
+ const struct rte_flow_action masks[],
+ struct rte_flow_error *error)
+{
+ (void)template_attr;
+ int res;
+
+ uint32_t num_dest_port = 0;
+ uint32_t num_queues = 0;
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate flow_def";
+ return NULL;
+ }
+
+ res = interpret_flow_actions(dev, actions, masks, fd, error, &num_dest_port, &num_queues);
+
+ if (res) {
+ free(fd);
+ return NULL;
+ }
+
+ /* Translate group IDs */
+ if (fd->jump_to_group != UINT32_MAX) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+ res = flow_group_translate_get(dev->ndev->group_handle, caller_id,
+ dev->port, fd->jump_to_group, &fd->jump_to_group);
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ if (res) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ free(fd);
+ return NULL;
+ }
+ }
+
+ struct flow_actions_template *template = calloc(1, sizeof(struct flow_actions_template));
+
+ if (template == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate actions_template";
+ free(fd);
+ return NULL;
+ }
+
+ template->fd = fd;
+ template->num_dest_port = num_dest_port;
+ template->num_queues = num_queues;
+
+ return template;
+}
+
+int flow_actions_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ free(actions_template->fd);
+ free(actions_template);
+
+ return 0;
+}
+
+struct flow_template_table *flow_template_table_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct flow_template_table *template_table = calloc(1, sizeof(struct flow_template_table));
+
+ if (template_table == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate template_table";
+ goto error_out;
+ }
+
+ template_table->pattern_templates =
+ malloc(sizeof(struct flow_pattern_template *) * nb_pattern_templates);
+ template_table->actions_templates =
+ malloc(sizeof(struct flow_actions_template *) * nb_actions_templates);
+ template_table->pattern_action_pairs =
+ calloc((uint32_t)nb_pattern_templates * nb_actions_templates,
+ sizeof(struct flow_template_table_cell));
+
+ if (template_table->pattern_templates == NULL ||
+ template_table->actions_templates == NULL ||
+ template_table->pattern_action_pairs == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate template_table variables";
+ goto error_out;
+ }
+
+ template_table->attr.priority = table_attr->flow_attr.priority;
+ template_table->attr.group = table_attr->flow_attr.group;
+ template_table->forced_vlan_vid = forced_vlan_vid;
+ template_table->caller_id = caller_id;
+
+ template_table->nb_pattern_templates = nb_pattern_templates;
+ template_table->nb_actions_templates = nb_actions_templates;
+
+ memcpy(template_table->pattern_templates, pattern_templates,
+ sizeof(struct flow_pattern_template *) * nb_pattern_templates);
+ memcpy(template_table->actions_templates, actions_templates,
+ sizeof(struct flow_actions_template *) * nb_actions_templates);
+
+ rte_spinlock_lock(&dev->ndev->mtx);
+ int res =
+ flow_group_translate_get(dev->ndev->group_handle, caller_id, dev->port,
+ template_table->attr.group, &template_table->attr.group);
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ /* Translate group IDs */
+ if (res) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ return template_table;
+
+error_out:
+
+ if (template_table) {
+ free(template_table->pattern_templates);
+ free(template_table->actions_templates);
+ free(template_table->pattern_action_pairs);
+ free(template_table);
+ }
+
+ return NULL;
+}
+
+int flow_template_table_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ const uint32_t nb_cells =
+ template_table->nb_pattern_templates * template_table->nb_actions_templates;
+
+ for (uint32_t i = 0; i < nb_cells; ++i) {
+ struct flow_template_table_cell *cell = &template_table->pattern_action_pairs[i];
+
+ if (cell->flm_db_idx_counter > 0) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)cell->flm_db_idxs,
+ cell->flm_db_idx_counter);
+ }
+ }
+
+ free(template_table->pattern_templates);
+ free(template_table->actions_templates);
+ free(template_table->pattern_action_pairs);
+ free(template_table);
+
+ return 0;
+}
+
struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev,
uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
@@ -5757,6 +5974,14 @@ static const struct profile_inline_ops ops = {
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
.flow_info_get_profile_inline = flow_info_get_profile_inline,
.flow_configure_profile_inline = flow_configure_profile_inline,
+ .flow_pattern_template_create_profile_inline = flow_pattern_template_create_profile_inline,
+ .flow_pattern_template_destroy_profile_inline =
+ flow_pattern_template_destroy_profile_inline,
+ .flow_actions_template_create_profile_inline = flow_actions_template_create_profile_inline,
+ .flow_actions_template_destroy_profile_inline =
+ flow_actions_template_destroy_profile_inline,
+ .flow_template_table_create_profile_inline = flow_template_table_create_profile_inline,
+ .flow_template_table_destroy_profile_inline = flow_template_table_destroy_profile_inline,
.flow_async_create_profile_inline = flow_async_create_profile_inline,
.flow_async_destroy_profile_inline = flow_async_destroy_profile_inline,
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b548142342..0dc89085ec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -73,6 +73,34 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
* RTE flow asynchronous operations functions
*/
+struct flow_pattern_template *flow_pattern_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error);
+
+int flow_pattern_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error);
+
+struct flow_actions_template *flow_actions_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error);
+
+int flow_actions_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error);
+
+struct flow_template_table *flow_template_table_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error);
+
+int flow_template_table_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error);
+
struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
struct flow_template_table *template_table, const struct rte_flow_item pattern[],
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index e8e7090661..eb764356eb 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -314,6 +314,36 @@ struct profile_inline_ops {
* RTE flow asynchronous operations functions
*/
+ struct flow_pattern_template *(*flow_pattern_template_create_profile_inline)
+ (struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error);
+
+ int (*flow_pattern_template_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error);
+
+ struct flow_actions_template *(*flow_actions_template_create_profile_inline)
+ (struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr,
+ uint16_t caller_id, const struct rte_flow_action actions[],
+ const struct rte_flow_action masks[], struct rte_flow_error *error);
+
+ int (*flow_actions_template_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error);
+
+ struct flow_template_table *(*flow_template_table_create_profile_inline)
+ (struct flow_eth_dev *dev, const struct rte_flow_template_table_attr *table_attr,
+ uint16_t forced_vlan_vid, uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error);
+
+ int (*flow_template_table_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error);
+
struct flow_handle *(*flow_async_create_profile_inline)(struct flow_eth_dev *dev,
uint32_t queue_id, const struct rte_flow_op_attr *op_attr,
struct flow_template_table *template_table, const struct rte_flow_item pattern[],
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 84/86] net/ntnic: update async flow API documentation
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (82 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 83/86] net/ntnic: add async template APIs implementation Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 85/86] net/ntnic: add MTU configuration Serhii Iliushyk
` (2 subsequent siblings)
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Update ntnic.rst and release notes
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
2 files changed, 2 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index e0dfbefacb..3794b4f216 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -72,6 +72,7 @@ Features
- Extended stats
- Flow metering, including meter policy API.
- Flow update. Update of the action list for specific flow
+- Asynchronous flow API
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 50cbebc33f..ebc237417e 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -165,6 +165,7 @@ New Features
* Added age rte flow action support
* Added meter flow metering and flow policy support
* Added flow actions update support
+ * Added asynchronous flow API support
* **Added cryptodev queue pair reset support.**
--
2.45.0
* [PATCH v4 85/86] net/ntnic: add MTU configuration
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (83 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 84/86] net/ntnic: update async flow API documentation Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 86/86] net/ntnic: update documentation for set MTU Serhii Iliushyk
2024-10-30 2:01 ` [PATCH v4 00/86] Provide flow filter API and statistics Ferruh Yigit
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Add support for the rte_eth_dev_set_mtu API
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 7 ++
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 96 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 82 +++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 9 ++
.../flow_api_profile_inline_config.h | 50 ++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 41 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 5 +
8 files changed, 292 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 8604dde995..5eace2614f 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -280,6 +280,11 @@ struct nic_flow_def {
* AGE action timeout
*/
struct age_def_s age;
+
+ /*
+ * TX fragmentation IFR/RPP_LR MTU recipe
+ */
+ uint8_t flm_mtu_fragmentation_recipe;
};
enum flow_handle_type {
@@ -340,6 +345,8 @@ struct flow_handle {
uint8_t flm_qfi;
uint8_t flm_scrub_prof;
+ uint8_t flm_mtu_fragmentation_recipe;
+
/* Flow specific pointer to application template table cell stored during
* flow create.
*/
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 7a36e4c6d6..f91a3ed058 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -958,8 +958,12 @@ int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, i
uint32_t value);
int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpp_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
index ba8f2d0dbb..2c3ed2355b 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
@@ -152,6 +152,54 @@ int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, i
return be->iface->tpe_rpp_ifr_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpp_ifr_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_ifr_categories)
+ return INDEX_TOO_LARGE;
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_IFR_RCP_IPV4_EN:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv4_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV4_DF_DROP:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv4_df_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_EN:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv6_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_DROP:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv6_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_MTU:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].mtu, value);
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpp_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpp_ifr_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPP_RCP
*/
@@ -262,6 +310,54 @@ int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_ifr_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_ifr_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_ifr_categories)
+ return INDEX_TOO_LARGE;
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_IFR_RCP_IPV4_EN:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv4_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV4_DF_DROP:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv4_df_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_EN:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv6_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_DROP:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv6_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_MTU:
+ GET_SET(be->tpe.v3.ifr_rcp[index].mtu, value);
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_ifr_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* INS_RCP
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 89e7041350..42d4c19eaa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -803,6 +803,11 @@ static inline void set_key_def_sw(struct flm_flow_key_def_s *key_def, unsigned i
}
}
+static inline uint8_t convert_port_to_ifr_mtu_recipe(uint32_t port)
+{
+ return port + 1;
+}
+
static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
{
struct flow_eth_dev *dev = ndev->eth_base;
@@ -1023,6 +1028,8 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->rqi = fh->flm_rqi;
/* Lower 10 bits used for RPL EXT PTR */
learn_record->color = fh->flm_rpl_ext_ptr & 0x3ff;
+ /* Bit [13:10] used for MTU recipe */
+ learn_record->color |= (fh->flm_mtu_fragmentation_recipe & 0xf) << 10;
learn_record->ent = 0;
learn_record->op = flm_op & 0xf;
@@ -1121,6 +1128,9 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
fd->dst_id[fd->dst_num_avail].active = 1;
fd->dst_num_avail++;
+ fd->flm_mtu_fragmentation_recipe =
+ convert_port_to_ifr_mtu_recipe(port);
+
if (fd->full_offload < 0)
fd->full_offload = 1;
@@ -3070,6 +3080,8 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
break;
}
}
+
+ fh->flm_mtu_fragmentation_recipe = fd->flm_mtu_fragmentation_recipe;
fh->context = fd->age.context;
}
@@ -3187,7 +3199,7 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
/* Setup COT */
struct hw_db_inline_cot_data cot_data = {
.matcher_color_contrib = empty_pattern ? 0x0 : 0x4, /* FT key C */
- .frag_rcp = 0,
+ .frag_rcp = empty_pattern ? fd->flm_mtu_fragmentation_recipe : 0,
};
struct hw_db_cot_idx cot_idx =
hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle, &cot_data);
@@ -3501,7 +3513,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/* Setup COT */
struct hw_db_inline_cot_data cot_data = {
.matcher_color_contrib = 0,
- .frag_rcp = 0,
+ .frag_rcp = fd->flm_mtu_fragmentation_recipe,
};
struct hw_db_cot_idx cot_idx =
hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle,
@@ -5416,6 +5428,67 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
return 0;
}
+int flow_set_mtu_inline(struct flow_eth_dev *dev, uint32_t port, uint16_t mtu)
+{
+ if (port >= 255)
+ return -1;
+
+ uint32_t ipv4_en_frag;
+ uint32_t ipv4_action;
+ uint32_t ipv6_en_frag;
+ uint32_t ipv6_action;
+
+ if (port == 0) {
+ ipv4_en_frag = PORT_0_IPV4_FRAGMENTATION;
+ ipv4_action = PORT_0_IPV4_DF_ACTION;
+ ipv6_en_frag = PORT_0_IPV6_FRAGMENTATION;
+ ipv6_action = PORT_0_IPV6_ACTION;
+
+ } else if (port == 1) {
+ ipv4_en_frag = PORT_1_IPV4_FRAGMENTATION;
+ ipv4_action = PORT_1_IPV4_DF_ACTION;
+ ipv6_en_frag = PORT_1_IPV6_FRAGMENTATION;
+ ipv6_action = PORT_1_IPV6_ACTION;
+
+ } else {
+ ipv4_en_frag = DISABLE_FRAGMENTATION;
+ ipv4_action = IPV4_DF_DROP;
+ ipv6_en_frag = DISABLE_FRAGMENTATION;
+ ipv6_action = IPV6_DROP;
+ }
+
+ int err = 0;
+ uint8_t ifr_mtu_recipe = convert_port_to_ifr_mtu_recipe(port);
+ struct flow_nic_dev *ndev = dev->ndev;
+
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_EN, ifr_mtu_recipe,
+ ipv4_en_frag);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_EN, ifr_mtu_recipe,
+ ipv6_en_frag);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_MTU, ifr_mtu_recipe, mtu);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_DF_DROP, ifr_mtu_recipe,
+ ipv4_action);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_DROP, ifr_mtu_recipe,
+ ipv6_action);
+
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_EN, ifr_mtu_recipe,
+ ipv4_en_frag);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_EN, ifr_mtu_recipe,
+ ipv6_en_frag);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_MTU, ifr_mtu_recipe, mtu);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_DF_DROP, ifr_mtu_recipe,
+ ipv4_action);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_DROP, ifr_mtu_recipe,
+ ipv6_action);
+
+ if (err == 0) {
+ err |= hw_mod_tpe_rpp_ifr_rcp_flush(&ndev->be, ifr_mtu_recipe, 1);
+ err |= hw_mod_tpe_ifr_rcp_flush(&ndev->be, ifr_mtu_recipe, 1);
+ }
+
+ return err;
+}
+
int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info,
struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
@@ -6000,6 +6073,11 @@ static const struct profile_inline_ops ops = {
.flm_free_queues = flm_free_queues,
.flm_mtr_read_stats = flm_mtr_read_stats,
.flm_update = flm_update,
+
+ /*
+ * Config API
+ */
+ .flow_set_mtu_inline = flow_set_mtu_inline,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 0dc89085ec..ce1a0669ee 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -11,6 +11,10 @@
#include "flow_api.h"
#include "stream_binary_flow_api.h"
+#define DISABLE_FRAGMENTATION 0
+#define IPV4_DF_DROP 1
+#define IPV6_DROP 1
+
/*
* Management
*/
@@ -120,4 +124,9 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
const struct rte_flow_queue_attr *queue_attr[],
struct rte_flow_error *error);
+/*
+ * Config API
+ */
+int flow_set_mtu_inline(struct flow_eth_dev *dev, uint32_t port, uint16_t mtu);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
index 3b53288ddf..c665cab16a 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -6,6 +6,56 @@
#ifndef _FLOW_API_PROFILE_INLINE_CONFIG_H_
#define _FLOW_API_PROFILE_INLINE_CONFIG_H_
+/*
+ * Per port configuration for IPv4 fragmentation and DF flag handling
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV4_FRAGMENTATION | IPV4_DF_ACTION || Exceeding MTU | DF flag || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - | - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_DROP || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_FORWARD || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV4_DF_ACTION IPV4_DF_DROP
+
+#define PORT_1_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV4_DF_ACTION IPV4_DF_DROP
+
+/*
+ * Per port configuration for IPv6 fragmentation
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV6_FRAGMENTATION | IPV6_ACTION || Exceeding MTU || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DROP || no || Forward ||
+ * || | || yes || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | FRAGMENT || no || Forward ||
+ * || | || yes || Fragment ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV6_ACTION IPV6_DROP
+
+#define PORT_1_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV6_ACTION IPV6_DROP
+
/*
* Statistics are generated each time the byte counter crosses a limit.
* If BYTE_LIMIT is zero then the byte counter does not trigger statistics
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 77436eb02d..2a2643a106 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -39,6 +39,7 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define THREAD_RETURN (0)
#define HW_MAX_PKT_LEN (10000)
#define MAX_MTU (HW_MAX_PKT_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)
+#define MIN_MTU_INLINE 512
#define EXCEPTION_PATH_HID 0
@@ -70,6 +71,8 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define MAX_RX_PACKETS 128
#define MAX_TX_PACKETS 128
+#define MTUINITVAL 1500
+
uint64_t rte_tsc_freq;
static void (*previous_handler)(int sig);
@@ -338,6 +341,7 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
dev_info->max_mtu = MAX_MTU;
if (p_adapter_info->fpga_info.profile == FPGA_INFO_PROFILE_INLINE) {
+ dev_info->min_mtu = MIN_MTU_INLINE;
dev_info->flow_type_rss_offloads = NT_ETH_RSS_OFFLOAD_MASK;
dev_info->hash_key_size = MAX_RSS_KEY_LEN;
@@ -1149,6 +1153,26 @@ static int eth_tx_scg_queue_setup(struct rte_eth_dev *eth_dev,
return 0;
}
+static int dev_set_mtu_inline(struct rte_eth_dev *eth_dev, uint16_t mtu)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct flow_eth_dev *flw_dev = internals->flw_dev;
+ int ret = -1;
+
+ if (internals->type == PORT_TYPE_PHYSICAL && mtu >= MIN_MTU_INLINE && mtu <= MAX_MTU)
+ ret = profile_inline_ops->flow_set_mtu_inline(flw_dev, internals->port, mtu);
+
+ return ret ? -EINVAL : 0;
+}
+
static int eth_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
eth_dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -1714,6 +1738,7 @@ static struct eth_dev_ops nthw_eth_dev_ops = {
.xstats_reset = eth_xstats_reset,
.xstats_get_by_id = eth_xstats_get_by_id,
.xstats_get_names_by_id = eth_xstats_get_names_by_id,
+ .mtu_set = NULL,
.promiscuous_enable = promiscuous_enable,
.rss_hash_update = eth_dev_rss_hash_update,
.rss_hash_conf_get = rss_hash_conf_get,
@@ -2277,6 +2302,7 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
internals->pci_dev = pci_dev;
internals->n_intf_no = n_intf_no;
internals->type = PORT_TYPE_PHYSICAL;
+ internals->port = n_intf_no;
internals->nb_rx_queues = nb_rx_queues;
internals->nb_tx_queues = nb_tx_queues;
@@ -2386,6 +2412,21 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* increase initialized ethernet devices - PF */
p_drv->n_eth_dev_init_count++;
+ if (get_flow_filter_ops() != NULL) {
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE &&
+ internals->flw_dev->ndev->be.tpe.ver >= 2) {
+ assert(nthw_eth_dev_ops.mtu_set == dev_set_mtu_inline ||
+ nthw_eth_dev_ops.mtu_set == NULL);
+ nthw_eth_dev_ops.mtu_set = dev_set_mtu_inline;
+ dev_set_mtu_inline(eth_dev, MTUINITVAL);
+ NT_LOG_DBGX(DBG, NTNIC, "INLINE MTU supported, tpe version %d",
+ internals->flw_dev->ndev->be.tpe.ver);
+
+ } else {
+ NT_LOG(DBG, NTNIC, "INLINE MTU not supported");
+ }
+ }
+
/* Port event thread */
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
res = THREAD_CTRL_CREATE(&p_nt_drv->port_event_thread, "nt_port_event_thr",
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index eb764356eb..71861c6dea 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -408,6 +408,11 @@ struct profile_inline_ops {
const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
const struct rte_flow_queue_attr *queue_attr[],
struct rte_flow_error *error);
+
+ /*
+ * Config API
+ */
+ int (*flow_set_mtu_inline)(struct flow_eth_dev *dev, uint32_t port, uint16_t mtu);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v4 86/86] net/ntnic: update documentation for set MTU
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (84 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 85/86] net/ntnic: add MTU configuration Serhii Iliushyk
@ 2024-10-29 16:42 ` Serhii Iliushyk
2024-10-30 2:01 ` [PATCH v4 00/86] Provide flow filter API and statistics Ferruh Yigit
86 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-29 16:42 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Update ntnic.rst and release notes
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
2 files changed, 2 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 3794b4f216..bb927f9505 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -73,6 +73,7 @@ Features
- Flow metering, including meter policy API.
- Flow update. Update of the action list for specific flow
- Asynchronous flow API
+- MTU update
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index ebc237417e..6c8dceb264 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -166,6 +166,7 @@ New Features
* Added meter flow metering and flow policy support
* Added flow actions update support
* Added asynchronous flow API support
+ * Added MTU update
* **Added cryptodev queue pair reset support.**
--
2.45.0
* Re: [PATCH v4 01/86] net/ntnic: add API for configuration NT flow dev
2024-10-29 16:41 ` [PATCH v4 01/86] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
@ 2024-10-30 1:54 ` Ferruh Yigit
0 siblings, 0 replies; 405+ messages in thread
From: Ferruh Yigit @ 2024-10-30 1:54 UTC (permalink / raw)
To: Serhii Iliushyk, dev
Cc: mko-plv, ckm, andrew.rybchenko, stephen, Danylo Vodopianov
On 10/29/2024 4:41 PM, Serhii Iliushyk wrote:
> From: Danylo Vodopianov <dvo-plv@napatech.com>
>
> This API allows enabling the flow profile for NT SmartNICs
>
> Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
>
<...>
> + for (i = 0; i < alloc_rx_queues; i++) {
> +#ifdef SCATTER_GATHER
> + eth_dev->rx_queue[i] = queue_ids[i];
> +#else
> + int queue_id = flow_nic_alloc_resource(ndev, RES_QUEUE, 1);
> +
> + if (queue_id < 0) {
> + NT_LOG(ERR, FILTER, "ERROR: no more free queue IDs in NIC");
> + goto err_exit0;
> + }
> +
> + eth_dev->rx_queue[eth_dev->num_queues].id = (uint8_t)queue_id;
> + eth_dev->rx_queue[eth_dev->num_queues].hw_id =
> + ndev->be.iface->alloc_rx_queue(ndev->be.be_dev,
> + eth_dev->rx_queue[eth_dev->num_queues].id);
> +
> + if (eth_dev->rx_queue[eth_dev->num_queues].hw_id < 0) {
> + NT_LOG(ERR, FILTER, "ERROR: could not allocate a new queue");
> + goto err_exit0;
> + }
> +
> + if (queue_ids)
> + queue_ids[eth_dev->num_queues] = eth_dev->rx_queue[eth_dev->num_queues];
> +#endif
>
There is no clear way to set this build-time configuration, so it is easy
to miss testing it, which leads to dead or broken code over time.
That is why it is better to convert these compile-time configurations to
dynamic configuration.
For this set, are you OK to go with default value of the macro, for
SCATTER_GATHER as far as I can see it is defined by default?
Same for FLOW_DEBUG below. I can see it is used for log, in that case
dynamic logging should be used.
* Re: [PATCH v4 15/86] net/ntnic: add item IPv4
2024-10-29 16:41 ` [PATCH v4 15/86] net/ntnic: add item IPv4 Serhii Iliushyk
@ 2024-10-30 1:55 ` Ferruh Yigit
0 siblings, 0 replies; 405+ messages in thread
From: Ferruh Yigit @ 2024-10-30 1:55 UTC (permalink / raw)
To: Serhii Iliushyk, dev; +Cc: mko-plv, ckm, andrew.rybchenko, stephen
On 10/29/2024 4:41 PM, Serhii Iliushyk wrote:
> Add possibility to use RTE_FLOW_ITEM_TYPE_IPV4
>
> Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
> ---
> doc/guides/nics/features/ntnic.ini | 1 +
> .../profile_inline/flow_api_profile_inline.c | 162 ++++++++++++++++++
> 2 files changed, 163 insertions(+)
>
> diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
> index 36b8212bae..bae25d2e2d 100644
> --- a/doc/guides/nics/features/ntnic.ini
> +++ b/doc/guides/nics/features/ntnic.ini
> @@ -16,6 +16,7 @@ x86-64 = Y
> [rte_flow items]
> any = Y
> eth = Y
> +ipv4 = Y
>
> [rte_flow actions]
> drop = Y
> diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
> index 93f666a054..d5d853351e 100644
> --- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
> +++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
> @@ -664,7 +664,169 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
>
> break;
>
> +
> + case RTE_FLOW_ITEM_TYPE_IPV4:
> + NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
> dev->ndev->adapter_no, dev->port);
> + {
> + const struct rte_flow_item_ipv4 *ipv4_spec =
> + (const struct rte_flow_item_ipv4 *)elem[eidx].spec;
> + const struct rte_flow_item_ipv4 *ipv4_mask =
> + (const struct rte_flow_item_ipv4 *)elem[eidx].mask;
> +
> + if (ipv4_spec == NULL || ipv4_mask == NULL) {
> + if (any_count > 0 || fd->l3_prot != -1)
> + fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
> + else
> + fd->l3_prot = PROT_L3_IPV4;
> + break;
> + }
> +
> + if (ipv4_mask->hdr.version_ihl != 0 ||
> + ipv4_mask->hdr.type_of_service != 0 ||
> + ipv4_mask->hdr.total_length != 0 ||
> + ipv4_mask->hdr.packet_id != 0 ||
> + (ipv4_mask->hdr.fragment_offset != 0 &&
> + (ipv4_spec->hdr.fragment_offset != 0xffff ||
> + ipv4_mask->hdr.fragment_offset != 0xffff)) ||
> + ipv4_mask->hdr.time_to_live != 0 ||
> + ipv4_mask->hdr.hdr_checksum != 0) {
> + NT_LOG(ERR, FILTER,
> + "Requested IPv4 field not supported by running SW version.");
> + flow_nic_set_error(ERR_FAILED, error);
> + return -1;
> + }
> +
> + if (ipv4_spec->hdr.fragment_offset == 0xffff &&
> + ipv4_mask->hdr.fragment_offset == 0xffff) {
> + fd->fragmentation = 0xfe;
> + }
> +
> + int match_cnt = (ipv4_mask->hdr.src_addr != 0) +
> + (ipv4_mask->hdr.dst_addr != 0) +
> + (ipv4_mask->hdr.next_proto_id != 0);
> +
> + if (match_cnt <= 0) {
> + if (any_count > 0 || fd->l3_prot != -1)
> + fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
> + else
> + fd->l3_prot = PROT_L3_IPV4;
> + break;
> + }
> +
> + if (qw_free > 0 &&
> + (match_cnt >= 2 ||
> + (match_cnt == 1 && sw_counter >= 2))) {
> + if (qw_counter >= 2) {
> + NT_LOG(ERR, FILTER,
> + "Key size too big. Out of QW resources.");
> + flow_nic_set_error(ERR_FAILED,
> + error);
> + return -1;
> + }
> +
> + uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
> + uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
> +
> + qw_mask[0] = 0;
> + qw_data[0] = 0;
> +
> + qw_mask[1] = ipv4_mask->hdr.next_proto_id << 16;
> + qw_data[1] = ipv4_spec->hdr.next_proto_id
> + << 16 & qw_mask[1];
> +
> + qw_mask[2] = ntohl(ipv4_mask->hdr.src_addr);
> + qw_mask[3] = ntohl(ipv4_mask->hdr.dst_addr);
> +
> + qw_data[2] = ntohl(ipv4_spec->hdr.src_addr) & qw_mask[2];
> + qw_data[3] = ntohl(ipv4_spec->hdr.dst_addr) & qw_mask[3];
> +
> + km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
> + any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
> + set_key_def_qw(key_def, qw_counter, any_count > 0
> + ? DYN_TUN_L3 : DYN_L3, 4);
> + qw_counter += 1;
> + qw_free -= 1;
> +
> + if (any_count > 0 || fd->l3_prot != -1)
> + fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
> + else
> + fd->l3_prot = PROT_L3_IPV4;
> + break;
> + }
> +
> + if (ipv4_mask->hdr.src_addr) {
> + if (sw_counter >= 2) {
> + NT_LOG(ERR, FILTER,
> + "Key size too big. Out of SW resources.");
> + flow_nic_set_error(ERR_FAILED, error);
> + return -1;
> + }
> +
> + uint32_t *sw_data = &packet_data[1 - sw_counter];
> + uint32_t *sw_mask = &packet_mask[1 - sw_counter];
> +
> + sw_mask[0] = ntohl(ipv4_mask->hdr.src_addr);
> + sw_data[0] = ntohl(ipv4_spec->hdr.src_addr) & sw_mask[0];
> +
> + km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
> + any_count > 0 ? DYN_TUN_L3 : DYN_L3, 12);
> + set_key_def_sw(key_def, sw_counter, any_count > 0
> + ? DYN_TUN_L3 : DYN_L3, 12);
> + sw_counter += 1;
> + }
> +
> + if (ipv4_mask->hdr.dst_addr) {
> + if (sw_counter >= 2) {
> + NT_LOG(ERR, FILTER,
> + "Key size too big. Out of SW resources.");
> + flow_nic_set_error(ERR_FAILED, error);
> + return -1;
> + }
> +
> + uint32_t *sw_data = &packet_data[1 - sw_counter];
> + uint32_t *sw_mask = &packet_mask[1 - sw_counter];
> +
> + sw_mask[0] = ntohl(ipv4_mask->hdr.dst_addr);
> + sw_data[0] = ntohl(ipv4_spec->hdr.dst_addr) & sw_mask[0];
> +
> + km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
> + any_count > 0 ? DYN_TUN_L3 : DYN_L3, 16);
> + set_key_def_sw(key_def, sw_counter, any_count > 0
> + ? DYN_TUN_L3 : DYN_L3, 16);
> + sw_counter += 1;
> + }
> +
> + if (ipv4_mask->hdr.next_proto_id) {
> + if (sw_counter >= 2) {
> + NT_LOG(ERR, FILTER,
> + "Key size too big. Out of SW resources.");
> + flow_nic_set_error(ERR_FAILED, error);
> + return -1;
> + }
> +
> + uint32_t *sw_data = &packet_data[1 - sw_counter];
> + uint32_t *sw_mask = &packet_mask[1 - sw_counter];
> +
> + sw_mask[0] = ipv4_mask->hdr.next_proto_id << 16;
> + sw_data[0] = ipv4_spec->hdr.next_proto_id
> + << 16 & sw_mask[0];
> +
> + km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
> + any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
> + set_key_def_sw(key_def, sw_counter, any_count > 0
> + ? DYN_TUN_L3 : DYN_L3, 8);
> + sw_counter += 1;
> + }
> +
> + if (any_count > 0 || fd->l3_prot != -1)
> + fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
> +
> + else
> + fd->l3_prot = PROT_L3_IPV4;
> + }
> +
> + break;
> break;
Redundant 'break'.
>
> default:
* Re: [PATCH v4 64/86] net/ntnic: update documentation
2024-10-29 16:42 ` [PATCH v4 64/86] net/ntnic: update documentation Serhii Iliushyk
@ 2024-10-30 1:55 ` Ferruh Yigit
0 siblings, 0 replies; 405+ messages in thread
From: Ferruh Yigit @ 2024-10-30 1:55 UTC (permalink / raw)
To: Serhii Iliushyk, dev
Cc: mko-plv, ckm, andrew.rybchenko, stephen, Oleksandr Kolomeiets
On 10/29/2024 4:42 PM, Serhii Iliushyk wrote:
> From: Oleksandr Kolomeiets <okl-plv@napatech.com>
>
> Update required documentation
>
> Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
> ---
> doc/guides/nics/ntnic.rst | 30 ++++++++++++++++++++++++++
> doc/guides/rel_notes/release_24_11.rst | 2 ++
> 2 files changed, 32 insertions(+)
>
> diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
> index 2c160ae592..e7e1cbcff7 100644
> --- a/doc/guides/nics/ntnic.rst
> +++ b/doc/guides/nics/ntnic.rst
> @@ -40,6 +40,36 @@ Features
> - Unicast MAC filter
> - Multicast MAC filter
> - Promiscuous mode (Enable only. The device always runs in promiscuous mode)
> +- Multiple TX and RX queues.
> +- Scatter and gather support for TX and RX.
> +- RSS hash
> +- RSS key update
> +- RSS based on VLAN or 5-tuple.
> +- RSS using different combinations of fields: L3 only, L4 only or both, and
> + source only, destination only or both.
> +- Several RSS hash keys, one for each flow type.
> +- Default RSS operation with no hash key specification.
> +- VLAN filtering.
> +- RX VLAN stripping via raw decap.
> +- TX VLAN insertion via raw encap.
> +- Flow API.
> +- Multiple processes.
> +- Tunnel types: GTP.
> +- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
> + verification.
> +- Support for multiple rte_flow groups.
> +- Encapsulation and decapsulation of GTP data.
> +- Packet modification: NAT, TTL decrement, DSCP tagging
> +- Traffic mirroring.
> +- Jumbo frame support.
> +- Port and queue statistics.
> +- RMON statistics in extended stats.
> +- Flow metering, including meter policy API.
> +- Link state information.
> +- CAM and TCAM based matching.
> +- Exact match of 140 million flows and policies.
> +- Basic stats
> +- Extended stats
>
Instead of having a separate commit, can you please distribute document
update to the patch that adds the documented feature?
>
> Limitations
> ~~~~~~~~~~~
> diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
> index fa4822d928..75769d1992 100644
> --- a/doc/guides/rel_notes/release_24_11.rst
> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -160,6 +160,8 @@ New Features
> * Added NT flow backend initialization.
> * Added initialization of FPGA modules related to flow HW offload.
> * Added basic handling of the virtual queues.
> + * Added flow handling API
> + * Added statistics API
>
> * **Added cryptodev queue pair reset support.**
>
* Re: [PATCH v4 05/86] net/ntnic: add minimal NT flow inline profile
2024-10-29 16:41 ` [PATCH v4 05/86] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
@ 2024-10-30 1:56 ` Ferruh Yigit
2024-10-30 21:08 ` Serhii Iliushyk
0 siblings, 1 reply; 405+ messages in thread
From: Ferruh Yigit @ 2024-10-30 1:56 UTC (permalink / raw)
To: Serhii Iliushyk, dev; +Cc: mko-plv, ckm, andrew.rybchenko, stephen
On 10/29/2024 4:41 PM, Serhii Iliushyk wrote:
> The flow profile implements all flow-related operations
>
Can you please give some more details about the profiles, and "inline
profile" mentioned?
> Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
> ---
> drivers/net/ntnic/include/flow_api.h | 15 +++++
> drivers/net/ntnic/meson.build | 1 +
> drivers/net/ntnic/nthw/flow_api/flow_api.c | 28 +++++++-
> .../profile_inline/flow_api_profile_inline.c | 65 +++++++++++++++++++
> .../profile_inline/flow_api_profile_inline.h | 33 ++++++++++
> drivers/net/ntnic/ntnic_mod_reg.c | 12 +++-
> drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++++
> 7 files changed, 174 insertions(+), 3 deletions(-)
> create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
> create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
>
> diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
> index c80906ec50..3bdfdd4f94 100644
> --- a/drivers/net/ntnic/include/flow_api.h
> +++ b/drivers/net/ntnic/include/flow_api.h
> @@ -74,6 +74,21 @@ struct flow_nic_dev {
> struct flow_nic_dev *next;
> };
>
> +enum flow_nic_err_msg_e {
> + ERR_SUCCESS = 0,
> + ERR_FAILED = 1,
> + ERR_OUTPUT_TOO_MANY = 3,
> + ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
> + ERR_MATCH_RESOURCE_EXHAUSTION = 14,
> + ERR_ACTION_UNSUPPORTED = 28,
> + ERR_REMOVE_FLOW_FAILED = 29,
> + ERR_OUTPUT_INVALID = 33,
> + ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
> + ERR_MSG_NO_MSG
> +};
> +
> +void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error);
> +
> /*
> * Resources
> */
> diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
> index d272c73c62..f5605e81cb 100644
> --- a/drivers/net/ntnic/meson.build
> +++ b/drivers/net/ntnic/meson.build
> @@ -47,6 +47,7 @@ sources = files(
> 'nthw/core/nthw_sdc.c',
> 'nthw/core/nthw_si5340.c',
> 'nthw/flow_api/flow_api.c',
> + 'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
> 'nthw/flow_api/flow_backend/flow_backend.c',
> 'nthw/flow_api/flow_filter.c',
> 'nthw/flow_api/flow_kcc.c',
> diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
> index d779dc481f..d0dad8e8f8 100644
> --- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
> +++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
> @@ -36,6 +36,29 @@ const char *dbg_res_descr[] = {
> static struct flow_nic_dev *dev_base;
> static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
>
> +/*
> + * Error handling
> + */
> +
> +static const struct {
> + const char *message;
> +} err_msg[] = {
> + /* 00 */ { "Operation successfully completed" },
> + /* 01 */ { "Operation failed" },
> + /* 29 */ { "Removing flow failed" },
> +};
> +
> +void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error)
> +{
> + assert(msg < ERR_MSG_NO_MSG);
> +
> + if (error) {
> + error->message = err_msg[msg].message;
> + error->type = (msg == ERR_SUCCESS) ? RTE_FLOW_ERROR_TYPE_NONE :
> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
> + }
> +}
> +
> /*
> * Resources
> */
> @@ -136,7 +159,8 @@ static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
> return NULL;
> }
>
> - return NULL;
> + return profile_inline_ops->flow_create_profile_inline(dev, attr,
> + forced_vlan_vid, caller_id, item, action, error);
> }
>
> static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
> @@ -149,7 +173,7 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
> return -1;
> }
>
> - return -1;
> + return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
> }
>
> /*
> diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
> new file mode 100644
> index 0000000000..a6293f5f82
> --- /dev/null
> +++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
> @@ -0,0 +1,65 @@
> +/*
> + * SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Napatech A/S
> + */
> +
> +#include "ntlog.h"
> +
> +#include "flow_api_profile_inline.h"
> +#include "ntnic_mod_reg.h"
> +
> +struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
> + const struct rte_flow_attr *attr,
> + uint16_t forced_vlan_vid,
> + uint16_t caller_id,
> + const struct rte_flow_item elem[],
> + const struct rte_flow_action action[],
> + struct rte_flow_error *error)
> +{
> + return NULL;
> +}
> +
> +int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
> + struct flow_handle *fh,
> + struct rte_flow_error *error)
> +{
> + assert(dev);
> + assert(fh);
> +
> + int err = 0;
> +
> + flow_nic_set_error(ERR_SUCCESS, error);
> +
> + return err;
> +}
> +
> +int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *flow,
> + struct rte_flow_error *error)
> +{
> + int err = 0;
> +
> + flow_nic_set_error(ERR_SUCCESS, error);
> +
> + if (flow) {
> + /* Delete this flow */
> + pthread_mutex_lock(&dev->ndev->mtx);
> + err = flow_destroy_locked_profile_inline(dev, flow, error);
> + pthread_mutex_unlock(&dev->ndev->mtx);
> + }
> +
> + return err;
> +}
> +
> +static const struct profile_inline_ops ops = {
> + /*
> + * Flow functionality
> + */
> + .flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
> + .flow_create_profile_inline = flow_create_profile_inline,
> + .flow_destroy_profile_inline = flow_destroy_profile_inline,
> +};
> +
> +void profile_inline_init(void)
> +{
> + register_profile_inline_ops(&ops);
> +}
> diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
> new file mode 100644
> index 0000000000..a83cc299b4
> --- /dev/null
> +++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
> @@ -0,0 +1,33 @@
> +/*
> + * SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Napatech A/S
> + */
> +
> +#ifndef _FLOW_API_PROFILE_INLINE_H_
> +#define _FLOW_API_PROFILE_INLINE_H_
> +
> +#include <stdint.h>
> +
> +#include "flow_api.h"
> +#include "stream_binary_flow_api.h"
> +
> +/*
> + * Flow functionality
> + */
> +int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
> + struct flow_handle *fh,
> + struct rte_flow_error *error);
> +
> +struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
> + const struct rte_flow_attr *attr,
> + uint16_t forced_vlan_vid,
> + uint16_t caller_id,
> + const struct rte_flow_item elem[],
> + const struct rte_flow_action action[],
> + struct rte_flow_error *error);
> +
> +int flow_destroy_profile_inline(struct flow_eth_dev *dev,
> + struct flow_handle *flow,
> + struct rte_flow_error *error);
> +
> +#endif /* _FLOW_API_PROFILE_INLINE_H_ */
> diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
> index ad2266116f..593b56bf5b 100644
> --- a/drivers/net/ntnic/ntnic_mod_reg.c
> +++ b/drivers/net/ntnic/ntnic_mod_reg.c
> @@ -118,9 +118,19 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
> return flow_backend_ops;
> }
>
> +static const struct profile_inline_ops *profile_inline_ops;
> +
> +void register_profile_inline_ops(const struct profile_inline_ops *ops)
> +{
> + profile_inline_ops = ops;
> +}
> +
> const struct profile_inline_ops *get_profile_inline_ops(void)
> {
> - return NULL;
> + if (profile_inline_ops == NULL)
> + profile_inline_init();
> +
> + return profile_inline_ops;
> }
>
> static const struct flow_filter_ops *flow_filter_ops;
> diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
> index ec8c1612d1..d133336fad 100644
> --- a/drivers/net/ntnic/ntnic_mod_reg.h
> +++ b/drivers/net/ntnic/ntnic_mod_reg.h
> @@ -225,7 +225,30 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
> const struct flow_backend_ops *get_flow_backend_ops(void);
> void flow_backend_init(void);
>
> +struct profile_inline_ops {
> + /*
> + * Flow functionality
> + */
> + int (*flow_destroy_locked_profile_inline)(struct flow_eth_dev *dev,
> + struct flow_handle *fh,
> + struct rte_flow_error *error);
> +
> + struct flow_handle *(*flow_create_profile_inline)(struct flow_eth_dev *dev,
> + const struct rte_flow_attr *attr,
> + uint16_t forced_vlan_vid,
> + uint16_t caller_id,
> + const struct rte_flow_item elem[],
> + const struct rte_flow_action action[],
> + struct rte_flow_error *error);
> +
> + int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
> + struct flow_handle *flow,
> + struct rte_flow_error *error);
> +};
> +
> +void register_profile_inline_ops(const struct profile_inline_ops *ops);
> const struct profile_inline_ops *get_profile_inline_ops(void);
> +void profile_inline_init(void);
>
> struct flow_filter_ops {
> int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
* Re: [PATCH v4 70/86] net/ntnic: add aging documentation
2024-10-29 16:42 ` [PATCH v4 70/86] net/ntnic: add aging documentation Serhii Iliushyk
@ 2024-10-30 1:56 ` Ferruh Yigit
0 siblings, 0 replies; 405+ messages in thread
From: Ferruh Yigit @ 2024-10-30 1:56 UTC (permalink / raw)
To: Serhii Iliushyk, dev
Cc: mko-plv, ckm, andrew.rybchenko, stephen, Danylo Vodopianov
On 10/29/2024 4:42 PM, Serhii Iliushyk wrote:
> From: Danylo Vodopianov <dvo-plv@napatech.com>
>
> The ntnic.rst document was extended with the age feature specification.
> ntnic.ini was extended with rte_flow action age support.
>
> Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
> ---
> doc/guides/nics/features/ntnic.ini | 1 +
> doc/guides/nics/ntnic.rst | 18 ++++++++++++++++++
> doc/guides/rel_notes/release_24_11.rst | 1 +
> 3 files changed, 20 insertions(+)
>
> diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
> index 947c7ba3a1..af2981ccf6 100644
> --- a/doc/guides/nics/features/ntnic.ini
> +++ b/doc/guides/nics/features/ntnic.ini
> @@ -33,6 +33,7 @@ udp = Y
> vlan = Y
>
> [rte_flow actions]
> +age = Y
> drop = Y
> jump = Y
> mark = Y
> diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
> index e7e1cbcff7..e5a8d71892 100644
> --- a/doc/guides/nics/ntnic.rst
> +++ b/doc/guides/nics/ntnic.rst
> @@ -148,3 +148,21 @@ FILTER
> To enable logging on all levels use wildcard in the following way::
>
> --log-level=pmd.net.ntnic.*,8
> +
> +Flow Scanner
> +------------
> +
> +Flow Scanner is a DPDK mechanism that periodically scans the RTE flow tables to check for aged-out flows.
> +When the flow timeout is reached, i.e. no packets were matched by the flow within the timeout period,
> +the ``RTE_ETH_EVENT_FLOW_AGED`` event is reported, and the flow is marked as aged-out.
> +
> +Therefore, flow scanner functionality is closely connected to the RTE flows' ``age`` action.
> +
> +The ``age timeout`` action has the following characteristics:
> + - functions only in group > 0;
> + - flow timeout is specified in seconds;
> + - flow scanner checks flows' age timeout once every 1-480 seconds; therefore, flows may not age out immediately, depending on how large the intervals between flow scanner checks are;
> + - aging counters can display maximum of **n - 1** aged flows when aging counters are set to **n**;
> + - overall 15 different timeouts can be specified for the flows at the same time (note that this limit is combined for all actions, therefore, 15 different actions can be created at the same time, maximum limit of 15 can be reached only across different groups - when 5 flows with different timeouts are created per one group, otherwise the limit within one group is 14 distinct flows);
> + - after flow is aged-out it's not automatically deleted;
> + - aged-out flow can be updated with ``flow update`` command, and its aged-out status will be reverted;
> diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
> index 75769d1992..b449b01dc8 100644
> --- a/doc/guides/rel_notes/release_24_11.rst
> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -162,6 +162,7 @@ New Features
> * Added basic handling of the virtual queues.
> * Added flow handling API
> * Added statistics API
> + * Added age rte flow action support
>
Similar comment as previous, please merge this patch with the patch that
introduces the flow aging functionality.
Same for "meter documentation" patch, "documentation for flow actions
update" patch, "flow API documentation" patch and "documentation for set
MTU" patch.
* Re: [PATCH v4 00/86] Provide flow filter API and statistics
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
` (85 preceding siblings ...)
2024-10-29 16:42 ` [PATCH v4 86/86] net/ntnic: update documentation for set MTU Serhii Iliushyk
@ 2024-10-30 2:01 ` Ferruh Yigit
86 siblings, 0 replies; 405+ messages in thread
From: Ferruh Yigit @ 2024-10-30 2:01 UTC (permalink / raw)
To: Serhii Iliushyk, dev; +Cc: mko-plv, ckm, andrew.rybchenko, stephen
On 10/29/2024 4:41 PM, Serhii Iliushyk wrote:
> The list of updates provided by the patchset:
> - FW version
> - Speed capabilities
> - Link status (Link update only)
> - Unicast MAC filter
> - Multicast MAC filter
> - Promiscuous mode (Enable only. The device always run promiscuous mode)
> - Multiple TX and RX queues.
> - Scatter and gather support for TX and RX.
> - RSS hash
> - RSS key update
> - RSS based on VLAN or 5-tuple.
> - RSS using different combinations of fields: L3 only, L4 only or both, and
> source only, destination only or both.
> - Several RSS hash keys, one for each flow type.
> - Default RSS operation with no hash key specification.
> - VLAN filtering.
> - RX VLAN stripping via raw decap.
> - TX VLAN insertion via raw encap.
> - Flow API.
> - Multiple processes.
> - Tunnel types: GTP.
> - Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
> verification.
> - Support for multiple rte_flow groups.
> - Encapsulation and decapsulation of GTP data.
> - Packet modification: NAT, TTL decrement, DSCP tagging
> - Traffic mirroring.
> - Jumbo frame support.
> - Port and queue statistics.
> - RMON statistics in extended stats.
> - Flow metering, including meter policy API.
> - Link state information.
> - CAM and TCAM based matching.
> - Exact match of 140 million flows and policies.
> - Basic stats
> - Extended stats
> - Flow metering, including meter policy API.
> - Flow update. Update of the action list for specific flow
> - Asynchronous flow API
> - MTU update
>
> Update: the pthread API was replaced with RTE spinlock in the separate patch.
>
> Danylo Vodopianov (43):
> net/ntnic: add API for configuration NT flow dev
> net/ntnic: add item UDP
> net/ntnic: add action TCP
> net/ntnic: add action VLAN
> net/ntnic: add item SCTP
> net/ntnic: add items IPv6 and ICMPv6
> net/ntnic: add action modify field
> net/ntnic: add items gtp and actions raw encap/decap
> net/ntnic: add cat module
> net/ntnic: add SLC LR module
> net/ntnic: add PDB module
> net/ntnic: add QSL module
> net/ntnic: add KM module
> net/ntnic: add hash API
> net/ntnic: add TPE module
> net/ntnic: add FLM module
> net/ntnic: add flm rcp module
> net/ntnic: add learn flow queue handling
> net/ntnic: match and action db attributes were added
> net/ntnic: add statistics API
> net/ntnic: add rpf module
> net/ntnic: add statistics poll
> net/ntnic: added flm stat interface
> net/ntnic: add tsm module
> net/ntnic: add xstats
> net/ntnic: added flow statistics
> net/ntnic: add scrub registers
> net/ntnic: add flow aging API
> net/ntnic: add aging API to the inline profile
> net/ntnic: add flow info and flow configure APIs
> net/ntnic: add flow aging event
> net/ntnic: add termination thread
> net/ntnic: add aging documentation
> net/ntnic: add meter API
> net/ntnic: add meter module
> net/ntnic: update meter documentation
> net/ntnic: add action update
> net/ntnic: add flow action update
> net/ntnic: flow update was added
> net/ntnic: add async create/destroy API declaration
> net/ntnic: add async template API declaration
> net/ntnic: add async flow create/delete API implementation
> net/ntnic: add async template APIs implementation
>
> Oleksandr Kolomeiets (18):
> net/ntnic: add flow dump feature
> net/ntnic: add flow flush
> net/ntnic: sort FPGA registers alphanumerically
> net/ntnic: add CSU module registers
> net/ntnic: add FLM module registers
> net/ntnic: add HFU module registers
> net/ntnic: add IFR module registers
> net/ntnic: add MAC Rx module registers
> net/ntnic: add MAC Tx module registers
> net/ntnic: add RPP LR module registers
> net/ntnic: add SLC LR module registers
> net/ntnic: add Tx CPY module registers
> net/ntnic: add Tx INS module registers
> net/ntnic: add Tx RPL module registers
> net/ntnic: add STA module
> net/ntnic: add TSM module
> net/ntnic: update documentation
> net/ntnic: add MTU configuration
>
> Serhii Iliushyk (25):
> net/ntnic: add flow filter API
> net/ntnic: add minimal create/destroy flow operations
> net/ntnic: add internal flow create/destroy API
> net/ntnic: add minimal NT flow inline profile
> net/ntnic: add management API for NT flow profile
> net/ntnic: add NT flow profile management implementation
> net/ntnic: add create/destroy implementation for NT flows
> net/ntnic: add infrastructure for flow actions and items
> net/ntnic: add action queue
> net/ntnic: add action mark
> net/ntnic: add action jump
> net/ntnic: add action drop
> net/ntnic: add item eth
> net/ntnic: add item IPv4
> net/ntnic: add item ICMP
> net/ntnic: add item port ID
> net/ntnic: add item void
> net/ntnic: add GMF (Generic MAC Feeder) module
> net/ntnic: update alignment for virt queue structs
> net/ntnic: enable RSS feature
> net/ntnic: update documentation for flow actions update
> net/ntnic: migrate to the RTE spinlock
> net/ntnic: remove unnecessary type cast
> net/ntnic: update async flow API documentation
> net/ntnic: update documentation for set MTU
>
Hi Serhii,
After each patch, the driver should build fine; this is required for git
bisect testing, but in this patch series the build fails after some
patches. Can you please check?
Also a minor comment: 'API' is used in some patch titles. In DPDK scope,
an API is a function exposed from a library for application use, so
almost all uses of 'API' here are wrong. Can you please update them?
For example, "net/ntnic: add flow filter API" can become "net/ntnic:
support flow filter".
^ permalink raw reply [flat|nested] 405+ messages in thread
* Re: [PATCH v4 05/86] net/ntnic: add minimal NT flow inline profile
2024-10-30 1:56 ` Ferruh Yigit
@ 2024-10-30 21:08 ` Serhii Iliushyk
0 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:08 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: Mykola Kostenok, Christian Koue Muf, andrew.rybchenko, stephen
>On 30.10.2024, 03:56, "Ferruh Yigit" wrote:
>
>On 10/29/2024 4:41 PM, Serhii Iliushyk wrote:
>> The flow profile implements all flow-related operations
>>
>
>
>Can you please give some more details about the profiles, and "inline
>profile" mentioned?
>
>
>> Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
>> ---
>> drivers/net/ntnic/include/flow_api.h | 15 +++++
>> drivers/net/ntnic/meson.build | 1 +
>> drivers/net/ntnic/nthw/flow_api/flow_api.c | 28 +++++++-
>> .../profile_inline/flow_api_profile_inline.c | 65 +++++++++++++++++++
>> .../profile_inline/flow_api_profile_inline.h | 33 ++++++++++
>> drivers/net/ntnic/ntnic_mod_reg.c | 12 +++-
>> drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++++
>> 7 files changed, 174 insertions(+), 3 deletions(-)
>> create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
>> create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
>>
>> diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
>> index c80906ec50..3bdfdd4f94 100644
>> --- a/drivers/net/ntnic/include/flow_api.h
>> +++ b/drivers/net/ntnic/include/flow_api.h
>> @@ -74,6 +74,21 @@ struct flow_nic_dev {
>> struct flow_nic_dev *next;
>> };
>>
>> +enum flow_nic_err_msg_e {
>> + ERR_SUCCESS = 0,
>> + ERR_FAILED = 1,
>> + ERR_OUTPUT_TOO_MANY = 3,
>> + ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
>> + ERR_MATCH_RESOURCE_EXHAUSTION = 14,
>> + ERR_ACTION_UNSUPPORTED = 28,
>> + ERR_REMOVE_FLOW_FAILED = 29,
>> + ERR_OUTPUT_INVALID = 33,
>> + ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
>> + ERR_MSG_NO_MSG
>> +};
>> +
>> +void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error);
>> +
>> /*
>> * Resources
>> */
>> diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
>> index d272c73c62..f5605e81cb 100644
>> --- a/drivers/net/ntnic/meson.build
>> +++ b/drivers/net/ntnic/meson.build
>> @@ -47,6 +47,7 @@ sources = files(
>> 'nthw/core/nthw_sdc.c',
>> 'nthw/core/nthw_si5340.c',
>> 'nthw/flow_api/flow_api.c',
>> + 'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
>> 'nthw/flow_api/flow_backend/flow_backend.c',
>> 'nthw/flow_api/flow_filter.c',
>> 'nthw/flow_api/flow_kcc.c',
>> diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
>> index d779dc481f..d0dad8e8f8 100644
>> --- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
>> +++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
>> @@ -36,6 +36,29 @@ const char *dbg_res_descr[] = {
>> static struct flow_nic_dev *dev_base;
>> static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
>>
>> +/*
>> + * Error handling
>> + */
>> +
>> +static const struct {
>> + const char *message;
>> +} err_msg[] = {
>> + /* 00 */ { "Operation successfully completed" },
>> + /* 01 */ { "Operation failed" },
>> + /* 29 */ { "Removing flow failed" },
>> +};
>> +
>> +void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error)
>> +{
>> + assert(msg < ERR_MSG_NO_MSG);
>> +
>> + if (error) {
>> + error->message = err_msg[msg].message;
>> + error->type = (msg == ERR_SUCCESS) ? RTE_FLOW_ERROR_TYPE_NONE :
>> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
>> + }
>> +}
>> +
>> /*
>> * Resources
>> */
>> @@ -136,7 +159,8 @@ static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
>> return NULL;
>> }
>>
>> - return NULL;
>> + return profile_inline_ops->flow_create_profile_inline(dev, attr,
>> + forced_vlan_vid, caller_id, item, action, error);
>> }
>>
>> static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
>> @@ -149,7 +173,7 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
>> return -1;
>> }
>>
>> - return -1;
>> + return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
>> }
>>
>> /*
>> diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
>> new file mode 100644
>> index 0000000000..a6293f5f82
>> --- /dev/null
>> +++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
>> @@ -0,0 +1,65 @@
>> +/*
>> + * SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2023 Napatech A/S
>> + */
>> +
>> +#include "ntlog.h"
>> +
>> +#include "flow_api_profile_inline.h"
>> +#include "ntnic_mod_reg.h"
>> +
>> +struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
>> + const struct rte_flow_attr *attr,
>> + uint16_t forced_vlan_vid,
>> + uint16_t caller_id,
>> + const struct rte_flow_item elem[],
>> + const struct rte_flow_action action[],
>> + struct rte_flow_error *error)
>> +{
>> + return NULL;
>> +}
>> +
>> +int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
>> + struct flow_handle *fh,
>> + struct rte_flow_error *error)
>> +{
>> + assert(dev);
>> + assert(fh);
>> +
>> + int err = 0;
>> +
>> + flow_nic_set_error(ERR_SUCCESS, error);
>> +
>> + return err;
>> +}
>> +
>> +int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *flow,
>> + struct rte_flow_error *error)
>> +{
>> + int err = 0;
>> +
>> + flow_nic_set_error(ERR_SUCCESS, error);
>> +
>> + if (flow) {
>> + /* Delete this flow */
>> + pthread_mutex_lock(&dev->ndev->mtx);
>> + err = flow_destroy_locked_profile_inline(dev, flow, error);
>> + pthread_mutex_unlock(&dev->ndev->mtx);
>> + }
>> +
>> + return err;
>> +}
>> +
>> +static const struct profile_inline_ops ops = {
>> + /*
>> + * Flow functionality
>> + */
>> + .flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
>> + .flow_create_profile_inline = flow_create_profile_inline,
>> + .flow_destroy_profile_inline = flow_destroy_profile_inline,
>> +};
>> +
>> +void profile_inline_init(void)
>> +{
>> + register_profile_inline_ops(&ops);
>> +}
>> diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
>> new file mode 100644
>> index 0000000000..a83cc299b4
>> --- /dev/null
>> +++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
>> @@ -0,0 +1,33 @@
>> +/*
>> + * SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2023 Napatech A/S
>> + */
>> +
>> +#ifndef _FLOW_API_PROFILE_INLINE_H_
>> +#define _FLOW_API_PROFILE_INLINE_H_
>> +
>> +#include <stdint.h>
>> +
>> +#include "flow_api.h"
>> +#include "stream_binary_flow_api.h"
>> +
>> +/*
>> + * Flow functionality
>> + */
>> +int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
>> + struct flow_handle *fh,
>> + struct rte_flow_error *error);
>> +
>> +struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
>> + const struct rte_flow_attr *attr,
>> + uint16_t forced_vlan_vid,
>> + uint16_t caller_id,
>> + const struct rte_flow_item elem[],
>> + const struct rte_flow_action action[],
>> + struct rte_flow_error *error);
>> +
>> +int flow_destroy_profile_inline(struct flow_eth_dev *dev,
>> + struct flow_handle *flow,
>> + struct rte_flow_error *error);
>> +
>> +#endif /* _FLOW_API_PROFILE_INLINE_H_ */
>> diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
>> index ad2266116f..593b56bf5b 100644
>> --- a/drivers/net/ntnic/ntnic_mod_reg.c
>> +++ b/drivers/net/ntnic/ntnic_mod_reg.c
>> @@ -118,9 +118,19 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
>> return flow_backend_ops;
>> }
>>
>> +static const struct profile_inline_ops *profile_inline_ops;
>> +
>> +void register_profile_inline_ops(const struct profile_inline_ops *ops)
>> +{
>> + profile_inline_ops = ops;
>> +}
>> +
>> const struct profile_inline_ops *get_profile_inline_ops(void)
>> {
>> - return NULL;
>> + if (profile_inline_ops == NULL)
>> + profile_inline_init();
>> +
>> + return profile_inline_ops;
>> }
>>
>> static const struct flow_filter_ops *flow_filter_ops;
>> diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
>> index ec8c1612d1..d133336fad 100644
>> --- a/drivers/net/ntnic/ntnic_mod_reg.h
>> +++ b/drivers/net/ntnic/ntnic_mod_reg.h
>> @@ -225,7 +225,30 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
>> const struct flow_backend_ops *get_flow_backend_ops(void);
>> void flow_backend_init(void);
>>
>> +struct profile_inline_ops {
>> + /*
>> + * Flow functionality
>> + */
>> + int (*flow_destroy_locked_profile_inline)(struct flow_eth_dev *dev,
>> + struct flow_handle *fh,
>> + struct rte_flow_error *error);
>> +
>> + struct flow_handle *(*flow_create_profile_inline)(struct flow_eth_dev *dev,
>> + const struct rte_flow_attr *attr,
>> + uint16_t forced_vlan_vid,
>> + uint16_t caller_id,
>> + const struct rte_flow_item elem[],
>> + const struct rte_flow_action action[],
>> + struct rte_flow_error *error);
>> +
>> + int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
>> + struct flow_handle *flow,
>> + struct rte_flow_error *error);
>> +};
>> +
>> +void register_profile_inline_ops(const struct profile_inline_ops *ops);
>> const struct profile_inline_ops *get_profile_inline_ops(void);
>> +void profile_inline_init(void);
>>
>> struct flow_filter_ops {
>> int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
>
Hi Ferruh
Please find below a description of the NT FPGA profiles.
The Napatech adapters support more functionality than can fit into a single FPGA.
This functionality is grouped into a number of profiles called inline, capture, vswitch, and basic.
The adapter support we aim to upstream this time around is for the inline profile.
The code contains some of the structures and enums needed to add other profiles,
mainly for future compatibility and for compatibility with our legacy code base.
Here is a short description of each profile:
Inline: Uses a scatter-gather packet system, which is quite fast and lightweight.
The FPGA contains functionality for hardware offload use cases, such as stateful flow matching,
encap/decap, packet steering, etc.
vSwitch: Also uses the scatter-gather system. The FPGA features are selected to support
the functionality of OVS.
Capture: Uses Napatech's proprietary packet buffer system,
which requires a lot of space on the FPGA but is extremely fast, guarantees zero packet loss,
and provides zero-copy packet handling on the host CPU.
Basic: The same feature set as one might expect from a basic NIC.
Mostly used in Kubernetes containers to reduce complexity.
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 00/80] Provide flow filter and statistics support
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
` (75 preceding siblings ...)
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 01/80] net/ntnic: add NT flow dev configuration Serhii Iliushyk
` (79 more replies)
76 siblings, 80 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The list of updates provided by the patchset:
- FW version
- Speed capabilities
- Link status (Link update only)
- Unicast MAC filter
- Multicast MAC filter
- Promiscuous mode (Enable only. The device always runs in promiscuous mode)
- Flow API support.
- Support for multiple rte_flow groups.
- Multiple TX and RX queues.
- Scatter-gather support for TX and RX.
- Jumbo frame support.
- Traffic mirroring.
- VLAN filtering.
- Packet modification: NAT, TTL decrement, DSCP tagging
- Tunnel types: GTP.
- Encapsulation and decapsulation of GTP data.
- RX VLAN stripping via raw decap.
- TX VLAN insertion via raw encap.
- CAM and TCAM based matching.
- Exact match of 140 million flows and policies.
- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
verification.
- RSS hash
- RSS key update
- RSS based on VLAN or 5-tuple.
- RSS using different combinations of fields: L3 only, L4 only or both, and
source only, destination only or both.
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
- Port and queue statistics.
- RMON statistics in extended stats.
- Link state information.
- Flow statistics
- Flow aging support
- Flow metering, including meter policy API.
- Flow update. Update of the action list for a specific flow
- Asynchronous flow support
- MTU update
Update: the pthread API was replaced with the RTE spinlock in a separate patch.
Danylo Vodopianov (41):
net/ntnic: add NT flow dev configuration
net/ntnic: add item UDP
net/ntnic: add action TCP
net/ntnic: add action VLAN
net/ntnic: add item SCTP
net/ntnic: add items IPv6 and ICMPv6
net/ntnic: add action modify field
net/ntnic: add items gtp and actions raw encap/decap
net/ntnic: add cat module
net/ntnic: add SLC LR module
net/ntnic: add PDB module
net/ntnic: add QSL module
net/ntnic: add KM module
net/ntnic: add hash API
net/ntnic: add TPE module
net/ntnic: add FLM module
net/ntnic: add FLM RCP module
net/ntnic: add learn flow queue handling
net/ntnic: add match and action DB attributes
net/ntnic: add statistics support
net/ntnic: add rpf module
net/ntnic: add statistics poll
net/ntnic: add FLM stat interface
net/ntnic: add TSM module
net/ntnic: add xStats
net/ntnic: add flow statistics
net/ntnic: add scrub registers
net/ntnic: add high-level flow aging support
net/ntnic: add aging to the inline profile
net/ntnic: add flow info and flow configure support
net/ntnic: add flow aging event
net/ntnic: add termination thread
net/ntnic: add meter support
net/ntnic: add meter module
net/ntnic: add action update support
net/ntnic: add flow action update
net/ntnic: add flow actions update
net/ntnic: add async create/destroy declaration
net/ntnic: add async template declaration
net/ntnic: add async flow create/delete implementation
net/ntnic: add async template implementation
Oleksandr Kolomeiets (17):
net/ntnic: add flow dump feature
net/ntnic: add flow flush
net/ntnic: sort FPGA registers alphanumerically
net/ntnic: add CSU module registers
net/ntnic: add FLM module registers
net/ntnic: add HFU module registers
net/ntnic: add IFR module registers
net/ntnic: add MAC Rx module registers
net/ntnic: add MAC Tx module registers
net/ntnic: add RPP LR module registers
net/ntnic: add SLC LR module registers
net/ntnic: add Tx CPY module registers
net/ntnic: add Tx INS module registers
net/ntnic: add Tx RPL module registers
net/ntnic: add STA module
net/ntnic: add TSM module
net/ntnic: add MTU configuration
Serhii Iliushyk (22):
net/ntnic: add flow filter support
net/ntnic: add minimal create/destroy flow operations
net/ntnic: add internal functions for create/destroy
net/ntnic: add minimal NT flow inline profile
net/ntnic: add management functions for NT flow profile
net/ntnic: add NT flow profile management implementation
net/ntnic: add create/destroy implementation for NT flows
net/ntnic: add infrastructure for flow actions and items
net/ntnic: add action queue
net/ntnic: add action mark
net/ntnic: add action jump
net/ntnic: add action drop
net/ntnic: add item eth
net/ntnic: add item IPv4
net/ntnic: add item ICMP
net/ntnic: add item port ID
net/ntnic: add item void
net/ntnic: add GMF (Generic MAC Feeder) module
net/ntnic: update alignment for virt queue structs
net/ntnic: enable RSS feature
net/ntnic: migrate to the RTE spinlock
net/ntnic: remove unnecessary
doc/guides/nics/features/default.ini | 2 +-
doc/guides/nics/features/ntnic.ini | 34 +
doc/guides/nics/ntnic.rst | 50 +
doc/guides/rel_notes/release_24_11.rst | 8 +
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 598 ++
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 +-
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 73 +
drivers/net/ntnic/include/flow_api.h | 142 +-
drivers/net/ntnic/include/flow_api_engine.h | 380 +
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 256 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 5 +
drivers/net/ntnic/include/ntnic_stat.h | 265 +
drivers/net/ntnic/include/ntos_drv.h | 24 +
.../ntnic/include/stream_binary_flow_api.h | 67 +
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 +
drivers/net/ntnic/meson.build | 20 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +
.../net/ntnic/nthw/core/include/nthw_i2cm.h | 4 +-
.../net/ntnic/nthw/core/include/nthw_rmc.h | 6 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 49 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 +
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 30 +
drivers/net/ntnic/nthw/core/nthw_rpf.c | 120 +
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 923 ++-
drivers/net/ntnic/nthw/flow_api/flow_group.c | 99 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 +
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
.../net/ntnic/nthw/flow_api/flow_id_table.c | 145 +
.../net/ntnic/nthw/flow_api/flow_id_table.h | 26 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1171 ++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 457 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 723 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 853 +++
.../flow_api/profile_inline/flm_age_queue.c | 164 +
.../flow_api/profile_inline/flm_age_queue.h | 42 +
.../flow_api/profile_inline/flm_evt_queue.c | 293 +
.../flow_api/profile_inline/flm_evt_queue.h | 55 +
.../flow_api/profile_inline/flm_lrn_queue.c | 70 +
.../flow_api/profile_inline/flm_lrn_queue.h | 25 +
.../profile_inline/flow_api_hw_db_inline.c | 3000 ++++++++
.../profile_inline/flow_api_hw_db_inline.h | 394 ++
.../profile_inline/flow_api_profile_inline.c | 6082 +++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 132 +
.../flow_api_profile_inline_config.h | 127 +
.../ntnic/nthw/flow_filter/flow_nthw_flm.c | 47 +-
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 +
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
drivers/net/ntnic/nthw/nthw_rac.c | 38 +-
drivers/net/ntnic/nthw/nthw_rac.h | 2 +-
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 498 ++
.../supported/nthw_fpga_9563_055_049_0000.c | 3317 ++++++---
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 11 +-
.../nthw/supported/nthw_fpga_mod_str_map.c | 2 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 5 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 48 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 205 +
drivers/net/ntnic/ntnic_ethdev.c | 813 ++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 1348 ++++
drivers/net/ntnic/ntnic_mod_reg.c | 111 +
drivers/net/ntnic/ntnic_mod_reg.h | 331 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 +++
drivers/net/ntnic/ntutil/nt_util.h | 12 +
80 files changed, 25744 insertions(+), 1123 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 01/80] net/ntnic: add NT flow dev configuration
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 02/80] net/ntnic: add flow filter support Serhii Iliushyk
` (78 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
This patch adds support for enabling the flow profile for the NT SmartNIC.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
v5
* remove unnecessary SCATTER_GATHER condition
---
drivers/net/ntnic/include/flow_api.h | 30 +++
drivers/net/ntnic/include/flow_api_engine.h | 5 +
drivers/net/ntnic/include/ntos_drv.h | 1 +
.../ntnic/include/stream_binary_flow_api.h | 9 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 209 +++++++++++++++++-
drivers/net/ntnic/ntnic_ethdev.c | 22 ++
drivers/net/ntnic/ntnic_mod_reg.c | 5 +
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++
8 files changed, 285 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 984450afdc..c80906ec50 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -34,6 +34,8 @@ struct flow_eth_dev {
struct flow_nic_dev *ndev;
/* NIC port id */
uint8_t port;
+ /* App assigned port_id - may be DPDK port_id */
+ uint32_t port_id;
/* 0th for exception */
struct flow_queue_id_s rx_queue[FLOW_MAX_QUEUES + 1];
@@ -41,6 +43,9 @@ struct flow_eth_dev {
/* VSWITCH has exceptions sent on queue 0 per design */
int num_queues;
+ /* QSL_HSH index if RSS needed QSL v6+ */
+ int rss_target_id;
+
struct flow_eth_dev *next;
};
@@ -48,6 +53,8 @@ struct flow_eth_dev {
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
uint16_t ports; /* number of in-ports addressable on this NIC */
+ /* flow profile this NIC is initially prepared for */
+ enum flow_eth_dev_profile flow_profile;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
@@ -73,6 +80,14 @@ struct flow_nic_dev {
extern const char *dbg_res_descr[];
+#define flow_nic_set_bit(arr, x) \
+ do { \
+ uint8_t *_temp_arr = (arr); \
+ size_t _temp_x = (x); \
+ _temp_arr[_temp_x / 8] = \
+ (uint8_t)(_temp_arr[_temp_x / 8] | (uint8_t)(1 << (_temp_x % 8))); \
+ } while (0)
+
#define flow_nic_unset_bit(arr, x) \
do { \
size_t _temp_x = (x); \
@@ -85,6 +100,18 @@ extern const char *dbg_res_descr[];
(arr[_temp_x / 8] & (uint8_t)(1 << (_temp_x % 8))); \
})
+#define flow_nic_mark_resource_used(_ndev, res_type, index) \
+ do { \
+ struct flow_nic_dev *_temp_ndev = (_ndev); \
+ typeof(res_type) _temp_res_type = (res_type); \
+ size_t _temp_index = (index); \
+ NT_LOG(DBG, FILTER, "mark resource used: %s idx %zu", \
+ dbg_res_descr[_temp_res_type], _temp_index); \
+ assert(flow_nic_is_bit_set(_temp_ndev->res[_temp_res_type].alloc_bm, \
+ _temp_index) == 0); \
+ flow_nic_set_bit(_temp_ndev->res[_temp_res_type].alloc_bm, _temp_index); \
+ } while (0)
+
#define flow_nic_mark_resource_unused(_ndev, res_type, index) \
do { \
typeof(res_type) _temp_res_type = (res_type); \
@@ -97,6 +124,9 @@ extern const char *dbg_res_descr[];
#define flow_nic_is_resource_used(_ndev, res_type, index) \
(!!flow_nic_is_bit_set((_ndev)->res[res_type].alloc_bm, index))
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment);
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index db5e6fe09d..d025677e25 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -41,6 +41,11 @@ enum res_type_e {
RES_INVALID
};
+/*
+ * Flow NIC offload management
+ */
+#define MAX_OUTPUT_DEST (128)
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index d51d1e3677..8fd577dfe3 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -86,6 +86,7 @@ struct __rte_cache_aligned ntnic_tx_queue {
struct pmd_internals {
const struct rte_pci_device *pci_dev;
+ struct flow_eth_dev *flw_dev;
char name[20];
int n_intf_no;
int lpbk_mode;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 10529b8843..47e5353344 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,11 +12,20 @@
#define FLOW_MAX_QUEUES 128
+/*
+ * Flow eth dev profile determines how the FPGA module resources are
+ * managed and what features are available
+ */
+enum flow_eth_dev_profile {
+ FLOW_ETH_DEV_PROFILE_INLINE = 0,
+};
+
struct flow_queue_id_s {
int id;
int hw_id;
};
struct flow_eth_dev; /* port device */
+struct flow_handle;
#endif /* _STREAM_BINARY_FLOW_API_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 34e84559eb..7716a9fc82 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_nic_setup.h"
#include "ntnic_mod_reg.h"
+#include "flow_api.h"
#include "flow_filter.h"
const char *dbg_res_descr[] = {
@@ -35,6 +36,24 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Resources
+ */
+
+int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ uint32_t alignment)
+{
+ for (unsigned int i = 0; i < ndev->res[res_type].resource_count; i += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, i)) {
+ flow_nic_mark_resource_used(ndev, res_type, i);
+ ndev->res[res_type].ref[i] = 1;
+ return i;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
@@ -55,10 +74,60 @@ int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return !!ndev->res[res_type].ref[index];/* if 0 resource has been freed */
}
+/*
+ * Nic port/adapter lookup
+ */
+
+static struct flow_eth_dev *nic_and_port_to_eth_dev(uint8_t adapter_no, uint8_t port)
+{
+ struct flow_nic_dev *nic_dev = dev_base;
+
+ while (nic_dev) {
+ if (nic_dev->adapter_no == adapter_no)
+ break;
+
+ nic_dev = nic_dev->next;
+ }
+
+ if (!nic_dev)
+ return NULL;
+
+ struct flow_eth_dev *dev = nic_dev->eth_base;
+
+ while (dev) {
+ if (port == dev->port)
+ return dev;
+
+ dev = dev->next;
+ }
+
+ return NULL;
+}
+
+static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
+{
+ struct flow_nic_dev *ndev = dev_base;
+
+ while (ndev) {
+ if (adapter_no == ndev->adapter_no)
+ break;
+
+ ndev = ndev->next;
+ }
+
+ return ndev;
+}
+
/*
* Device Management API
*/
+static void nic_insert_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *dev)
+{
+ dev->next = ndev->eth_base;
+ ndev->eth_base = dev;
+}
+
static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_dev *eth_dev)
{
struct flow_eth_dev *dev = ndev->eth_base, *prev = NULL;
@@ -156,16 +225,6 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
#endif
-#ifndef SCATTER_GATHER
-
- /* free rx queues */
- for (int i = 0; i < eth_dev->num_queues; i++) {
- ndev->be.iface->free_rx_queue(ndev->be.be_dev, eth_dev->rx_queue[i].hw_id);
- flow_nic_deref_resource(ndev, RES_QUEUE, eth_dev->rx_queue[i].id);
- }
-
-#endif
-
/* take eth_dev out of ndev list */
if (nic_remove_eth_port_dev(ndev, eth_dev) != 0)
NT_LOG(ERR, FILTER, "ERROR : eth_dev %p not found", eth_dev);
@@ -242,6 +301,132 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
return -1;
}
+/*
+ * adapter_no physical adapter no
+ * port_no local port no
+ * alloc_rx_queues number of rx-queues to allocate for this eth_dev
+ */
+static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no, uint32_t port_id,
+ int alloc_rx_queues, struct flow_queue_id_s queue_ids[],
+ int *rss_target_id, enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+
+ int i;
+ struct flow_eth_dev *eth_dev = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "Get eth-port adapter %i, port %i, port_id %u, rx queues %i, profile %i",
+ adapter_no, port_no, port_id, alloc_rx_queues, flow_profile);
+
+ if (MAX_OUTPUT_DEST < FLOW_MAX_QUEUES) {
+ assert(0);
+ NT_LOG(ERR, FILTER,
+ "ERROR: Internal array for multiple queues too small for API");
+ }
+
+ pthread_mutex_lock(&base_mtx);
+ struct flow_nic_dev *ndev = get_nic_dev_from_adapter_no(adapter_no);
+
+ if (!ndev) {
+ /* Error - no flow api found on specified adapter */
+ NT_LOG(ERR, FILTER, "ERROR: no flow interface registered for adapter %d",
+ adapter_no);
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if (ndev->ports < ((uint16_t)port_no + 1)) {
+ NT_LOG(ERR, FILTER, "ERROR: port exceeds supported port range for adapter");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ if ((alloc_rx_queues - 1) > FLOW_MAX_QUEUES) { /* 0th is exception so +1 */
+ NT_LOG(ERR, FILTER,
+ "ERROR: Exceeds supported number of rx queues per eth device");
+ pthread_mutex_unlock(&base_mtx);
+ return NULL;
+ }
+
+ /* don't accept multiple eth_dev's on same NIC and same port */
+ eth_dev = nic_and_port_to_eth_dev(adapter_no, port_no);
+
+ if (eth_dev) {
+ NT_LOG(DBG, FILTER, "Re-opening existing NIC port device: NIC DEV: %i Port %i",
+ adapter_no, port_no);
+ pthread_mutex_unlock(&base_mtx);
+ flow_delete_eth_dev(eth_dev);
+ eth_dev = NULL;
+ }
+
+ eth_dev = calloc(1, sizeof(struct flow_eth_dev));
+
+ if (!eth_dev) {
+ NT_LOG(ERR, FILTER, "ERROR: calloc failed");
+ goto err_exit1;
+ }
+
+ pthread_mutex_lock(&ndev->mtx);
+
+ eth_dev->ndev = ndev;
+ eth_dev->port = port_no;
+ eth_dev->port_id = port_id;
+
+ /* Allocate the requested queues in HW for this dev */
+
+ for (i = 0; i < alloc_rx_queues; i++) {
+ eth_dev->rx_queue[i] = queue_ids[i];
+
+ if (i == 0 && (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE && exception_path)) {
+ /*
+ * Init QSL UNM - unmatched - redirects otherwise discarded
+ * packets in QSL
+ */
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_DEST_QUEUE, eth_dev->port,
+ eth_dev->rx_queue[0].hw_id) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1) < 0)
+ goto err_exit0;
+ }
+
+ eth_dev->num_queues++;
+ }
+
+ eth_dev->rss_target_id = -1;
+
+ *rss_target_id = eth_dev->rss_target_id;
+
+ nic_insert_eth_port_dev(ndev, eth_dev);
+
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+ return eth_dev;
+
+err_exit0:
+ pthread_mutex_unlock(&ndev->mtx);
+ pthread_mutex_unlock(&base_mtx);
+
+err_exit1:
+ if (eth_dev)
+ free(eth_dev);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ NT_LOG(DBG, FILTER, "ERR in %s", __func__);
+ return NULL; /* Error exit */
+}
+
struct flow_nic_dev *flow_api_create(uint8_t adapter_no, const struct flow_api_backend_ops *be_if,
void *be_dev)
{
@@ -383,6 +568,10 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
+ /*
+ * Device Management API
+ */
+ .flow_get_eth_dev = flow_get_eth_dev,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bff893ec7a..510c0e5d23 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1355,6 +1355,13 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ /* No return here: SW traffic processing can continue without the flow filter module */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1378,10 +1385,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
uint32_t n_port_mask = -1; /* All ports enabled by default */
uint32_t nb_rx_queues = 1;
uint32_t nb_tx_queues = 1;
+ uint32_t exception_path = 0;
struct flow_queue_id_s queue_ids[MAX_QUEUES];
int n_phy_ports;
struct port_link_speed pls_mbps[NUM_ADAPTER_PORTS_MAX] = { 0 };
int num_port_speeds = 0;
+ enum flow_eth_dev_profile profile = FLOW_ETH_DEV_PROFILE_INLINE;
+
NT_LOG_DBGX(DBG, NTNIC, "Dev %s PF #%i Init : %02x:%02x:%i", pci_dev->name,
pci_dev->addr.function, pci_dev->addr.bus, pci_dev->addr.devid,
pci_dev->addr.function);
@@ -1681,6 +1691,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (flow_filter_ops != NULL) {
+ internals->flw_dev = flow_filter_ops->flow_get_eth_dev(0, n_intf_no,
+ eth_dev->data->port_id, nb_rx_queues, queue_ids,
+ &internals->txq_scg[0].rss_target_id, profile, exception_path);
+
+ if (!internals->flw_dev) {
+ NT_LOG(ERR, NTNIC,
+ "Error creating port. Resource exhaustion in HW");
+ return -1;
+ }
+ }
+
/* connect structs */
internals->p_drv = p_drv;
eth_dev->data->dev_private = internals;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index a03c97801b..ac8afdef6a 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,6 +118,11 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+const struct profile_inline_ops *get_profile_inline_ops(void)
+{
+ return NULL;
+}
+
static const struct flow_filter_ops *flow_filter_ops;
void register_flow_filter_ops(const struct flow_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 5b97b3d8ac..017d15d7bc 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -8,6 +8,7 @@
#include <stdint.h>
#include "flow_api.h"
+#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
#include "nthw_platform_drv.h"
#include "nthw_drv.h"
@@ -223,10 +224,23 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+const struct profile_inline_ops *get_profile_inline_ops(void);
+
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
int adapter_no);
int (*flow_filter_done)(struct flow_nic_dev *dev);
+ /*
+ * Device Management API
+ */
+ struct flow_eth_dev *(*flow_get_eth_dev)(uint8_t adapter_no,
+ uint8_t hw_port_no,
+ uint32_t port_id,
+ int alloc_rx_queues,
+ struct flow_queue_id_s queue_ids[],
+ int *rss_target_id,
+ enum flow_eth_dev_profile flow_profile,
+ uint32_t exception_path);
};
void register_flow_filter_ops(const struct flow_filter_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
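The `flow_nic_alloc_resource()` hunk in the patch above implements a first-fit scan, stepped by the requested alignment, over a per-type resource table, marking the winning index used and setting its reference count to 1. A standalone sketch of that allocation pattern (the `res_pool_s`/`pool_alloc` names are illustrative stand-ins, not driver symbols; the real driver walks `ndev->res[res_type]` the same way):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define POOL_MAX 64

struct res_pool_s {
	uint32_t count;          /* resource_count in the driver */
	uint8_t used[POOL_MAX];  /* usage flags, one per resource index */
	uint8_t ref[POOL_MAX];   /* reference counts */
};

/* First free index that is a multiple of 'alignment', or -1 on exhaustion. */
static int pool_alloc(struct res_pool_s *p, uint32_t alignment)
{
	for (uint32_t i = 0; i < p->count; i += alignment) {
		if (!p->used[i]) {
			p->used[i] = 1;
			p->ref[i] = 1;
			return (int)i;
		}
	}
	return -1;
}

/* Mirrors flow_nic_free_resource(): clear the slot and its refcount. */
static void pool_free(struct res_pool_s *p, int idx)
{
	p->used[idx] = 0;
	p->ref[idx] = 0;
}
```

Because the scan only visits aligned indices, a pool of 8 entries yields at most two blocks at alignment 4 (indices 0 and 4) before reporting exhaustion.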
* [PATCH v5 02/80] net/ntnic: add flow filter support
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 01/80] net/ntnic: add NT flow dev configuration Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 03/80] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
` (77 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Enable flow ops getter.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 13 +++++++
.../ntnic/include/stream_binary_flow_api.h | 2 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 7 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 37 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 15 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 5 +++
7 files changed, 80 insertions(+)
create mode 100644 drivers/net/ntnic/include/create_elements.h
create mode 100644 drivers/net/ntnic/ntnic_filter/ntnic_filter.c
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
new file mode 100644
index 0000000000..802e6dcbe1
--- /dev/null
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -0,0 +1,13 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __CREATE_ELEMENTS_H__
+#define __CREATE_ELEMENTS_H__
+
+
+#include "stream_binary_flow_api.h"
+#include <rte_flow.h>
+
+#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 47e5353344..a6244d4082 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,8 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include "rte_flow.h"
+#include "rte_flow_driver.h"
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 3d9566a52e..d272c73c62 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -79,6 +79,7 @@ sources = files(
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
'ntlog/ntlog.c',
+ 'ntnic_filter/ntnic_filter.c',
'ntutil/nt_util.c',
'ntnic_mod_reg.c',
'ntnic_vfio.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 510c0e5d23..a509a8eb51 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1321,6 +1321,12 @@ eth_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, size_t fw_size
}
}
+static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct rte_flow_ops **ops)
+{
+ *ops = get_dev_flow_ops();
+ return 0;
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1349,6 +1355,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
};
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
new file mode 100644
index 0000000000..445139abc9
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -0,0 +1,37 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_flow_driver.h>
+#include "ntnic_mod_reg.h"
+
+static int
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ int res = 0;
+
+ return res;
+}
+
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ const struct rte_flow_item items[] __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct rte_flow *flow = NULL;
+
+ return flow;
+}
+
+static const struct rte_flow_ops dev_flow_ops = {
+ .create = eth_flow_create,
+ .destroy = eth_flow_destroy,
+};
+
+void dev_flow_init(void)
+{
+ register_dev_flow_ops(&dev_flow_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ac8afdef6a..ad2266116f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -137,3 +137,18 @@ const struct flow_filter_ops *get_flow_filter_ops(void)
return flow_filter_ops;
}
+
+static const struct rte_flow_ops *dev_flow_ops;
+
+void register_dev_flow_ops(const struct rte_flow_ops *ops)
+{
+ dev_flow_ops = ops;
+}
+
+const struct rte_flow_ops *get_dev_flow_ops(void)
+{
+ if (dev_flow_ops == NULL)
+ dev_flow_init();
+
+ return dev_flow_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 017d15d7bc..457dc58794 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -15,6 +15,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nthw_fpga_rst_nt200a0x.h"
#include "ntnic_virt_queue.h"
+#include "create_elements.h"
/* sg ops section */
struct sg_ops_s {
@@ -243,6 +244,10 @@ struct flow_filter_ops {
uint32_t exception_path);
};
+void register_dev_flow_ops(const struct rte_flow_ops *ops);
+const struct rte_flow_ops *get_dev_flow_ops(void);
+void dev_flow_init(void);
+
void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
--
2.45.0
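Patch 02 wires `rte_flow_ops` into the PMD through the register/get pattern used throughout `ntnic_mod_reg.c`: a module registers its static ops table once, and the getter lazily triggers initialization on first use. A minimal standalone sketch of that pattern (`my_flow_ops` and the stub callbacks are hypothetical stand-ins for `struct rte_flow_ops` and the driver's handlers):

```c
#include <assert.h>
#include <stddef.h>

struct my_flow_ops {
	int (*create)(void);
	int (*destroy)(void);
};

/* Module-registry slot, like dev_flow_ops in ntnic_mod_reg.c. */
static const struct my_flow_ops *registered_ops;

static void register_my_flow_ops(const struct my_flow_ops *ops)
{
	registered_ops = ops;
}

static int stub_create(void) { return 0; }
static int stub_destroy(void) { return 0; }

static const struct my_flow_ops stub_ops = {
	.create = stub_create,
	.destroy = stub_destroy,
};

/* Mirrors dev_flow_init(): register the module's static ops table. */
static void my_flow_init(void)
{
	register_my_flow_ops(&stub_ops);
}

/* Mirrors get_dev_flow_ops(): initialize lazily on first lookup. */
static const struct my_flow_ops *get_my_flow_ops(void)
{
	if (registered_ops == NULL)
		my_flow_init();
	return registered_ops;
}
```

The lazy getter is what lets `dev_flow_ops_get()` in `ntnic_ethdev.c` hand the table to `rte_flow` without caring about module init order.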
* [PATCH v5 03/80] net/ntnic: add minimal create/destroy flow operations
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 01/80] net/ntnic: add NT flow dev configuration Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 02/80] net/ntnic: add flow filter support Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 04/80] net/ntnic: add internal functions for create/destroy Serhii Iliushyk
` (76 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add high-level API flow create/destroy implementation
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/create_elements.h | 51 ++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 227 +++++++++++++++++-
drivers/net/ntnic/ntutil/nt_util.h | 3 +
3 files changed, 274 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 802e6dcbe1..179542d2b2 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -6,8 +6,59 @@
#ifndef __CREATE_ELEMENTS_H__
#define __CREATE_ELEMENTS_H__
+#include <stdint.h>
#include "stream_binary_flow_api.h"
#include <rte_flow.h>
+#define MAX_ELEMENTS 64
+#define MAX_ACTIONS 32
+
+struct cnv_match_s {
+ struct rte_flow_item rte_flow_item[MAX_ELEMENTS];
+};
+
+struct cnv_attr_s {
+ struct cnv_match_s match;
+ struct rte_flow_attr attr;
+ uint16_t forced_vlan_vid;
+ uint16_t caller_id;
+};
+
+struct cnv_action_s {
+ struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_queue queue;
+};
+
+/*
+ * Only needed because it eases statistics use through NTAPI for faster
+ * integration into the NTAPI version of the driver, so this is only a
+ * good idea when running on a temporary NTAPI. The query() functionality
+ * must move to the flow engine when ported to the Open Source driver.
+ */
+
+struct rte_flow {
+ void *flw_hdl;
+ int used;
+
+ uint32_t flow_stat_id;
+
+ uint16_t caller_id;
+};
+
+enum nt_rte_flow_item_type {
+ NT_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
+ NT_RTE_FLOW_ITEM_TYPE_TUNNEL,
+};
+
+extern rte_spinlock_t flow_lock;
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem);
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset);
+
#endif /* __CREATE_ELEMENTS_H__ */
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 445139abc9..74cf360da0 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -4,24 +4,237 @@
*/
#include <rte_flow_driver.h>
+#include "nt_util.h"
+#include "create_elements.h"
#include "ntnic_mod_reg.h"
+#include "ntos_system.h"
+
+#define MAX_RTE_FLOWS 8192
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
+static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+
+int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
+{
+ if (error) {
+ error->cause = NULL;
+ error->message = rte_flow_error->message;
+
+ if (rte_flow_error->type == RTE_FLOW_ERROR_TYPE_NONE)
+ error->type = RTE_FLOW_ERROR_TYPE_NONE;
+
+ else
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+
+ return 0;
+}
+
+int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr)
+{
+ memset(&attribute->attr, 0x0, sizeof(struct rte_flow_attr));
+
+ if (attr) {
+ attribute->attr.group = attr->group;
+ attribute->attr.priority = attr->priority;
+ }
+
+ return 0;
+}
+
+int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
+ int max_elem)
+{
+ int eidx = 0;
+ int iter_idx = 0;
+ int type = -1;
+
+ if (!items) {
+ NT_LOG(ERR, FILTER, "ERROR no items to iterate!");
+ return -1;
+ }
+
+ do {
+ type = items[iter_idx].type;
+
+ if (type < 0) {
+ if ((int)items[iter_idx].type == NT_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ type = NT_RTE_FLOW_ITEM_TYPE_TUNNEL;
+
+ } else {
+ NT_LOG(ERR, FILTER, "ERROR unknown item type received!");
+ return -1;
+ }
+ }
+
+ if (type >= 0) {
+ if (items[iter_idx].last) {
+ /* Ranges are not supported yet */
+ NT_LOG(ERR, FILTER, "ERROR ITEM-RANGE SETUP - NOT SUPPORTED!");
+ return -1;
+ }
+
+ if (eidx == max_elem) {
+ NT_LOG(ERR, FILTER, "ERROR TOO MANY ELEMENTS ENCOUNTERED!");
+ return -1;
+ }
+
+ match->rte_flow_item[eidx].type = type;
+ match->rte_flow_item[eidx].spec = items[iter_idx].spec;
+ match->rte_flow_item[eidx].mask = items[iter_idx].mask;
+
+ eidx++;
+ iter_idx++;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
+ return (type >= 0) ? 0 : -1;
+}
+
+int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
+ const struct rte_flow_action actions[] __rte_unused,
+ int max_elem __rte_unused,
+ uint32_t queue_offset __rte_unused)
+{
+ int type = -1;
+
+ return (type >= 0) ? 0 : -1;
+}
+
+static inline uint16_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + port + 1;
+}
+
+static int convert_flow(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct cnv_attr_s *attribute,
+ struct cnv_match_s *match,
+ struct cnv_action_s *action,
+ struct rte_flow_error *error)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t queue_offset = 0;
+
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!internals) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Missing eth_dev");
+ return -1;
+ }
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0) {
+ /*
+ * The queues coming from the main PMD will always start from 0
+ * When the port is a VF/vDPA port, the queues must be changed
+ * to match the queues allocated for VF/vDPA.
+ */
+ queue_offset = internals->vpq[0].id;
+ }
+
+ if (create_attr(attribute, attr) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL, "Error in attr");
+ return -1;
+ }
+
+ if (create_match_elements(match, items, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in items");
+ return -1;
+ }
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ if (create_action_elements_inline(action, actions,
+ MAX_ACTIONS, queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return -1;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return -1;
+ }
+
+ return 0;
+}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow __rte_unused,
- struct rte_flow_error *error __rte_unused)
+eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
+ struct rte_flow_error *error)
{
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
+ /* Set initial error */
+ convert_error(error, &flow_error);
+
+ if (!flow)
+ return 0;
return res;
}
-static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_attr *attr __rte_unused,
- const struct rte_flow_item items[] __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- struct rte_flow_error *error __rte_unused)
+static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+
+ struct cnv_attr_s attribute = { 0 };
+ struct cnv_match_s match = { 0 };
+ struct cnv_action_s action = { 0 };
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ uint32_t flow_stat_id = 0;
+
+ if (convert_flow(eth_dev, attr, items, actions, &attribute, &match, &action, error) < 0)
+ return NULL;
+
+ /* Main application caller_id is port_id shifted above VF ports */
+ attribute.caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ convert_error(error, &flow_error);
+ return (struct rte_flow *)NULL;
+ }
+
struct rte_flow *flow = NULL;
+ rte_spinlock_lock(&flow_lock);
+ int i;
+
+ for (i = 0; i < MAX_RTE_FLOWS; i++) {
+ if (!nt_flows[i].used) {
+ nt_flows[i].flow_stat_id = flow_stat_id;
+
+ if (nt_flows[i].flow_stat_id < NT_MAX_COLOR_FLOW_STATS) {
+ nt_flows[i].used = 1;
+ flow = &nt_flows[i];
+ }
+
+ break;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
return flow;
}
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 64947f5fbf..71ecd6c68c 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -9,6 +9,9 @@
#include <stdint.h>
#include "nt4ga_link.h"
+/* Total max VDPA ports */
+#define MAX_VDPA_PORTS 128UL
+
#ifndef ARRAY_SIZE
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
--
2.45.0
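`eth_flow_create()` above draws `struct rte_flow` handles from the static `nt_flows[]` array: under `flow_lock` it scans for the first slot with `used == 0` and claims it, and `eth_flow_destroy()` releases the slot by clearing the flag. A reduced sketch of that fixed-table slot allocator (names are illustrative; the `rte_spinlock` guarding the scan in the driver is omitted here for brevity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_FLOWS 8  /* the driver uses MAX_RTE_FLOWS (8192) */

struct flow_slot {
	int used;
	uint32_t flow_stat_id;
	uint16_t caller_id;
};

/* Static table, like nt_flows[] in ntnic_filter.c. */
static struct flow_slot flows[MAX_FLOWS];

/* Claim the first free slot, stamping its stat id; NULL when full. */
static struct flow_slot *flow_slot_get(uint32_t flow_stat_id)
{
	for (int i = 0; i < MAX_FLOWS; i++) {
		if (!flows[i].used) {
			flows[i].used = 1;
			flows[i].flow_stat_id = flow_stat_id;
			return &flows[i];
		}
	}
	return NULL; /* table exhausted */
}

/* Release a slot so the scan can hand it out again. */
static void flow_slot_put(struct flow_slot *f)
{
	f->used = 0;
}
```

A static pool of this kind avoids per-flow heap allocation in the datapath-adjacent control path, at the cost of a hard cap on concurrent flows.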
* [PATCH v5 04/80] net/ntnic: add internal functions for create/destroy
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (2 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 03/80] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 05/80] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
` (75 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
NT-specific functions for creating/destroying a flow
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 39 +++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 66 ++++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 14 ++++
3 files changed, 116 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 7716a9fc82..acfcad2064 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -117,6 +117,40 @@ static struct flow_nic_dev *get_nic_dev_from_adapter_no(uint8_t adapter_no)
return ndev;
}
+/*
+ * Flow API
+ */
+
+static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ uint16_t forced_vlan_vid __rte_unused,
+ uint16_t caller_id __rte_unused,
+ const struct rte_flow_item item[] __rte_unused,
+ const struct rte_flow_action action[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return NULL;
+ }
+
+ return NULL;
+}
+
+static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
+ struct flow_handle *flow __rte_unused, struct rte_flow_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return -1;
+}
/*
* Device Management API
@@ -572,6 +606,11 @@ static const struct flow_filter_ops ops = {
* Device Management API
*/
.flow_get_eth_dev = flow_get_eth_dev,
+ /*
+ * NT Flow API
+ */
+ .flow_create = flow_create,
+ .flow_destroy = flow_destroy,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 74cf360da0..b9d723c9dd 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -110,6 +110,13 @@ static inline uint16_t get_caller_id(uint16_t port)
return MAX_VDPA_PORTS + port + 1;
}
+static int is_flow_handle_typecast(struct rte_flow *flow)
+{
+ const void *first_element = &nt_flows[0];
+ const void *last_element = &nt_flows[MAX_RTE_FLOWS - 1];
+ return (void *)flow < first_element || (void *)flow > last_element;
+}
+
static int convert_flow(struct rte_eth_dev *eth_dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
@@ -173,9 +180,17 @@ static int convert_flow(struct rte_eth_dev *eth_dev,
}
static int
-eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow,
- struct rte_flow_error *error)
+eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
int res = 0;
@@ -185,6 +200,20 @@ eth_flow_destroy(struct rte_eth_dev *eth_dev __rte_unused, struct rte_flow *flow
if (!flow)
return 0;
+ if (is_flow_handle_typecast(flow)) {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, (void *)flow, &flow_error);
+ convert_error(error, &flow_error);
+
+ } else {
+ res = flow_filter_ops->flow_destroy(internals->flw_dev, flow->flw_hdl,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ rte_spinlock_unlock(&flow_lock);
+ }
+
return res;
}
@@ -194,6 +223,13 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
@@ -213,8 +249,12 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
attribute.caller_id = get_caller_id(eth_dev->data->port_id);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE && attribute.attr.group > 0) {
+ void *flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
convert_error(error, &flow_error);
- return (struct rte_flow *)NULL;
+ return (struct rte_flow *)flw_hdl;
}
struct rte_flow *flow = NULL;
@@ -236,6 +276,26 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
rte_spinlock_unlock(&flow_lock);
+ if (flow) {
+ flow->flw_hdl = flow_filter_ops->flow_create(internals->flw_dev, &attribute.attr,
+ attribute.forced_vlan_vid, attribute.caller_id,
+ match.rte_flow_item, action.flow_actions,
+ &flow_error);
+ convert_error(error, &flow_error);
+
+ if (!flow->flw_hdl) {
+ rte_spinlock_lock(&flow_lock);
+ flow->used = 0;
+ flow = NULL;
+ rte_spinlock_unlock(&flow_lock);
+
+ } else {
+ rte_spinlock_lock(&flow_lock);
+ flow->caller_id = attribute.caller_id;
+ rte_spinlock_unlock(&flow_lock);
+ }
+ }
+
return flow;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 457dc58794..ec8c1612d1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -242,6 +242,20 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ /*
+ * NT Flow API
+ */
+ struct flow_handle *(*flow_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item item[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 05/80] net/ntnic: add minimal NT flow inline profile
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (3 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 04/80] net/ntnic: add internal functions for create/destroy Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 06/80] net/ntnic: add management functions for NT flow profile Serhii Iliushyk
` (74 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The flow profile implements all flow-related operations.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 15 +++++
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 28 +++++++-
.../profile_inline/flow_api_profile_inline.c | 65 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 33 ++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 12 +++-
drivers/net/ntnic/ntnic_mod_reg.h | 23 +++++++
7 files changed, 174 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index c80906ec50..3bdfdd4f94 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -74,6 +74,21 @@ struct flow_nic_dev {
struct flow_nic_dev *next;
};
+enum flow_nic_err_msg_e {
+ ERR_SUCCESS = 0,
+ ERR_FAILED = 1,
+ ERR_OUTPUT_TOO_MANY = 3,
+ ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_ACTION_UNSUPPORTED = 28,
+ ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_OUTPUT_INVALID = 33,
+ ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_MSG_NO_MSG
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error);
+
/*
* Resources
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d272c73c62..f5605e81cb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index acfcad2064..5c5bd147d1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -36,6 +36,29 @@ const char *dbg_res_descr[] = {
static struct flow_nic_dev *dev_base;
static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+/*
+ * Error handling
+ */
+
+static const struct {
+ const char *message;
+} err_msg[] = {
+ /* 00 */ { "Operation successfully completed" },
+ /* 01 */ { "Operation failed" },
+ /* 29 */ { "Removing flow failed" },
+};
+
+void flow_nic_set_error(enum flow_nic_err_msg_e msg, struct rte_flow_error *error)
+{
+ assert(msg < ERR_MSG_NO_MSG);
+
+ if (error) {
+ error->message = err_msg[msg].message;
+ error->type = (msg == ERR_SUCCESS) ? RTE_FLOW_ERROR_TYPE_NONE :
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ }
+}
+
/*
* Resources
*/
@@ -136,7 +159,8 @@ static struct flow_handle *flow_create(struct flow_eth_dev *dev __rte_unused,
return NULL;
}
- return NULL;
+ return profile_inline_ops->flow_create_profile_inline(dev, attr,
+ forced_vlan_vid, caller_id, item, action, error);
}
static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
@@ -149,7 +173,7 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return -1;
}
- return -1;
+ return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
new file mode 100644
index 0000000000..34e01c5839
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -0,0 +1,65 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "flow_api_profile_inline.h"
+#include "ntnic_mod_reg.h"
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev __rte_unused,
+ const struct rte_flow_attr *attr __rte_unused,
+ uint16_t forced_vlan_vid __rte_unused,
+ uint16_t caller_id __rte_unused,
+ const struct rte_flow_item elem[] __rte_unused,
+ const struct rte_flow_action action[] __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ return NULL;
+}
+
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error)
+{
+ assert(dev);
+ assert(fh);
+
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ return err;
+}
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *flow,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (flow) {
+ /* Delete this flow */
+ pthread_mutex_lock(&dev->ndev->mtx);
+ err = flow_destroy_locked_profile_inline(dev, flow, error);
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ }
+
+ return err;
+}
+
+static const struct profile_inline_ops ops = {
+ /*
+ * Flow functionality
+ */
+ .flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
+ .flow_create_profile_inline = flow_create_profile_inline,
+ .flow_destroy_profile_inline = flow_destroy_profile_inline,
+};
+
+void profile_inline_init(void)
+{
+ register_profile_inline_ops(&ops);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
new file mode 100644
index 0000000000..a83cc299b4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -0,0 +1,33 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_H_
+#define _FLOW_API_PROFILE_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+#include "stream_binary_flow_api.h"
+
+/*
+ * Flow functionality
+ */
+int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+int flow_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+
+#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index ad2266116f..593b56bf5b 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -118,9 +118,19 @@ const struct flow_backend_ops *get_flow_backend_ops(void)
return flow_backend_ops;
}
+static const struct profile_inline_ops *profile_inline_ops;
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops)
+{
+ profile_inline_ops = ops;
+}
+
const struct profile_inline_ops *get_profile_inline_ops(void)
{
- return NULL;
+ if (profile_inline_ops == NULL)
+ profile_inline_init();
+
+ return profile_inline_ops;
}
static const struct flow_filter_ops *flow_filter_ops;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index ec8c1612d1..d133336fad 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -225,7 +225,30 @@ void register_flow_backend_ops(const struct flow_backend_ops *ops);
const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
+struct profile_inline_ops {
+ /*
+ * Flow functionality
+ */
+ int (*flow_destroy_locked_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *fh,
+ struct rte_flow_error *error);
+
+ struct flow_handle *(*flow_create_profile_inline)(struct flow_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ const struct rte_flow_item elem[],
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
+ int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ struct rte_flow_error *error);
+};
+
+void register_profile_inline_ops(const struct profile_inline_ops *ops);
const struct profile_inline_ops *get_profile_inline_ops(void);
+void profile_inline_init(void);
struct flow_filter_ops {
int (*flow_filter_init)(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device,
--
2.45.0
* [PATCH v5 06/80] net/ntnic: add management functions for NT flow profile
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (4 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 05/80] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 07/80] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
` (73 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Management functions implement (re)setting of the NT flow dev.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v5
* Remove unnecessary SCATTER_GATHER definition
---
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 ++
drivers/net/ntnic/nthw/flow_api/flow_api.c | 58 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 20 +++++++
.../profile_inline/flow_api_profile_inline.h | 8 +++
drivers/net/ntnic/ntnic_mod_reg.h | 8 +++
6 files changed, 100 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 3bdfdd4f94..790b2f6b03 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -55,6 +55,7 @@ struct flow_nic_dev {
uint16_t ports; /* number of in-ports addressable on this NIC */
/* flow profile this NIC is initially prepared for */
enum flow_eth_dev_profile flow_profile;
+ int flow_mgnt_prepared;
struct hw_mod_resource_s res[RES_COUNT];/* raw NIC resource allocation table */
void *km_res_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index d025677e25..52ff3cb865 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -46,6 +46,11 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+struct flow_handle {
+ struct flow_eth_dev *dev;
+ struct flow_handle *next;
+};
+
void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 5c5bd147d1..a9016238d0 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -210,10 +210,29 @@ static int nic_remove_eth_port_dev(struct flow_nic_dev *ndev, struct flow_eth_de
static void flow_ndev_reset(struct flow_nic_dev *ndev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return;
+ }
+
/* Delete all eth-port devices created on this NIC device */
while (ndev->eth_base)
flow_delete_eth_dev(ndev->eth_base);
+ /* Error check */
+ while (ndev->flow_base) {
+ NT_LOG(ERR, FILTER,
+ "ERROR : Flows still defined but all eth-ports deleted. Flow %p",
+ ndev->flow_base);
+
+ profile_inline_ops->flow_destroy_profile_inline(ndev->flow_base->dev,
+ ndev->flow_base, NULL);
+ }
+
+ profile_inline_ops->done_flow_management_of_ndev_profile_inline(ndev);
+
km_free_ndev_resource_management(&ndev->km_res_handle);
kcc_free_ndev_resource_management(&ndev->kcc_res_handle);
@@ -255,6 +274,13 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
struct flow_nic_dev *ndev = eth_dev->ndev;
if (!ndev) {
@@ -271,6 +297,20 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
/* delete all created flows from this device */
pthread_mutex_lock(&ndev->mtx);
+ struct flow_handle *flow = ndev->flow_base;
+
+ while (flow) {
+ if (flow->dev == eth_dev) {
+ struct flow_handle *flow_next = flow->next;
+ profile_inline_ops->flow_destroy_locked_profile_inline(eth_dev, flow,
+ NULL);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
/*
* remove unmatched queue if setup in QSL
* remove exception queue setting in QSL UNM
@@ -435,6 +475,24 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->port = port_no;
eth_dev->port_id = port_id;
+ /* First time the NIC is initialized */
+ if (!ndev->flow_mgnt_prepared) {
+ ndev->flow_profile = flow_profile;
+
+ /* Initialize modules if needed - recipe 0 is used as no-match and must be setup */
+ if (profile_inline_ops != NULL &&
+ profile_inline_ops->initialize_flow_management_of_ndev_profile_inline(ndev))
+ goto err_exit0;
+
+ } else {
+ /* check if same flow type is requested, otherwise fail */
+ if (ndev->flow_profile != flow_profile) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: Different flow types requested on same NIC device. Not supported.");
+ goto err_exit0;
+ }
+ }
+
/* Allocate the requested queues in HW for this dev */
for (i = 0; i < alloc_rx_queues; i++) {
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 34e01c5839..0400527197 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,20 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+/*
+ * Public functions
+ */
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev __rte_unused)
+{
+ return -1;
+}
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev __rte_unused)
+{
+ return 0;
+}
+
struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev __rte_unused,
const struct rte_flow_attr *attr __rte_unused,
uint16_t forced_vlan_vid __rte_unused,
@@ -51,6 +65,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
}
static const struct profile_inline_ops ops = {
+ /*
+ * Management
+ */
+ .done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
+ .initialize_flow_management_of_ndev_profile_inline =
+ initialize_flow_management_of_ndev_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index a83cc299b4..b87f8542ac 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -11,6 +11,14 @@
#include "flow_api.h"
#include "stream_binary_flow_api.h"
+/*
+ * Management
+ */
+
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index d133336fad..149c549112 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -226,6 +226,14 @@ const struct flow_backend_ops *get_flow_backend_ops(void);
void flow_backend_init(void);
struct profile_inline_ops {
+ /*
+ * Management
+ */
+
+ int (*done_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
+ int (*initialize_flow_management_of_ndev_profile_inline)(struct flow_nic_dev *ndev);
+
/*
* Flow functionality
*/
--
2.45.0
* [PATCH v5 07/80] net/ntnic: add NT flow profile management implementation
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (5 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 06/80] net/ntnic: add management functions for NT flow profile Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 08/80] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
` (72 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Implement the functions required to (re)set the NT flow dev.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 4 ++
drivers/net/ntnic/include/flow_api_engine.h | 10 ++++
drivers/net/ntnic/meson.build | 4 ++
drivers/net/ntnic/nthw/flow_api/flow_group.c | 55 +++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 52 ++++++++++++++++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 19 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 59 +++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 23 ++++++++
.../profile_inline/flow_api_profile_inline.c | 56 +++++++++++++++++-
9 files changed, 280 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_group.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_id_table.h
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 790b2f6b03..748da89262 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -61,6 +61,10 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *group_handle;
+ void *hw_db_handle;
+ void *id_table_handle;
+
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 52ff3cb865..2497c31a08 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -6,6 +6,8 @@
#ifndef _FLOW_API_ENGINE_H_
#define _FLOW_API_ENGINE_H_
+#include <stdint.h>
+
/*
* Resource management
*/
@@ -46,6 +48,9 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_CPY_WRITERS_SUPPORTED 8
+
+
struct flow_handle {
struct flow_eth_dev *dev;
struct flow_handle *next;
@@ -55,4 +60,9 @@ void km_free_ndev_resource_management(void **handle);
void kcc_free_ndev_resource_management(void **handle);
+/*
+ * Group management
+ */
+int flow_group_handle_create(void **handle, uint32_t group_count);
+int flow_group_handle_destroy(void **handle);
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f5605e81cb..f7292144ac 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -18,6 +18,7 @@ includes = [
include_directories('nthw/supported'),
include_directories('nthw/model'),
include_directories('nthw/flow_filter'),
+ include_directories('nthw/flow_api'),
include_directories('nim/'),
]
@@ -47,7 +48,10 @@ sources = files(
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
'nthw/flow_api/flow_api.c',
+ 'nthw/flow_api/flow_group.c',
+ 'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
+ 'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
'nthw/flow_api/flow_kcc.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
new file mode 100644
index 0000000000..a7371f3aad
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -0,0 +1,55 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "flow_api_engine.h"
+
+#define OWNER_ID_COUNT 256
+#define PORT_COUNT 8
+
+struct group_lookup_entry_s {
+ uint64_t ref_counter;
+ uint32_t *reverse_lookup;
+};
+
+struct group_handle_s {
+ uint32_t group_count;
+
+ uint32_t *translation_table;
+
+ struct group_lookup_entry_s *lookup_entries;
+};
+
+int flow_group_handle_create(void **handle, uint32_t group_count)
+{
+ struct group_handle_s *group_handle;
+
+ *handle = calloc(1, sizeof(struct group_handle_s));
+ group_handle = *handle;
+
+ group_handle->group_count = group_count;
+ group_handle->translation_table =
+ calloc((uint32_t)(group_count * PORT_COUNT * OWNER_ID_COUNT), sizeof(uint32_t));
+ group_handle->lookup_entries = calloc(group_count, sizeof(struct group_lookup_entry_s));
+
+ return *handle != NULL ? 0 : -1;
+}
+
+int flow_group_handle_destroy(void **handle)
+{
+ if (*handle) {
+ struct group_handle_s *group_handle = (struct group_handle_s *)*handle;
+
+ free(group_handle->translation_table);
+ free(group_handle->lookup_entries);
+
+ free(*handle);
+ *handle = NULL;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
new file mode 100644
index 0000000000..9b46848e59
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <pthread.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "flow_id_table.h"
+
+#define NTNIC_ARRAY_BITS 14
+#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+
+struct ntnic_id_table_element {
+ union flm_handles handle;
+ uint8_t caller_id;
+ uint8_t type;
+};
+
+struct ntnic_id_table_data {
+ struct ntnic_id_table_element *arrays[NTNIC_ARRAY_SIZE];
+ pthread_mutex_t mtx;
+
+ uint32_t next_id;
+
+ uint32_t free_head;
+ uint32_t free_tail;
+ uint32_t free_count;
+};
+
+void *ntnic_id_table_create(void)
+{
+ struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
+
+ pthread_mutex_init(&handle->mtx, NULL);
+ handle->next_id = 1;
+
+ return handle;
+}
+
+void ntnic_id_table_destroy(void *id_table)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ for (uint32_t i = 0; i < NTNIC_ARRAY_SIZE; ++i)
+ free(handle->arrays[i]);
+
+ pthread_mutex_destroy(&handle->mtx);
+
+ free(id_table);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
new file mode 100644
index 0000000000..13455f1165
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLOW_ID_TABLE_H_
+#define _FLOW_ID_TABLE_H_
+
+#include <stdint.h>
+
+union flm_handles {
+ uint64_t idx;
+ void *p;
+};
+
+void *ntnic_id_table_create(void);
+void ntnic_id_table_destroy(void *id_table);
+
+#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
new file mode 100644
index 0000000000..5fda11183c
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+
+#include "flow_api_hw_db_inline.h"
+
+/******************************************************************************/
+/* Handle */
+/******************************************************************************/
+
+struct hw_db_inline_resource_db {
+ /* Actions */
+ struct hw_db_inline_resource_db_cot {
+ struct hw_db_inline_cot_data data;
+ int ref;
+ } *cot;
+
+ uint32_t nb_cot;
+
+ /* Hardware */
+
+ struct hw_db_inline_resource_db_cfn {
+ uint64_t priority;
+ int cfn_hw;
+ int ref;
+ } *cfn;
+};
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
+{
+ /* Note: calloc is required for functionality in the hw_db_inline_destroy() */
+ struct hw_db_inline_resource_db *db = calloc(1, sizeof(struct hw_db_inline_resource_db));
+
+ if (db == NULL)
+ return -1;
+
+ db->nb_cot = ndev->be.cat.nb_cat_funcs;
+ db->cot = calloc(db->nb_cot, sizeof(struct hw_db_inline_resource_db_cot));
+
+ if (db->cot == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ *db_handle = db;
+ return 0;
+}
+
+void hw_db_inline_destroy(void *db_handle)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ free(db->cot);
+
+ free(db->cfn);
+
+ free(db);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
new file mode 100644
index 0000000000..23caf73cf3
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_HW_DB_INLINE_H_
+#define _FLOW_API_HW_DB_INLINE_H_
+
+#include <stdint.h>
+
+#include "flow_api.h"
+
+struct hw_db_inline_cot_data {
+ uint32_t matcher_color_contrib : 4;
+ uint32_t frag_rcp : 4;
+ uint32_t padding : 24;
+};
+
+/**/
+
+int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
+void hw_db_inline_destroy(void *db_handle);
+
+#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0400527197..6d91678c56 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,6 +4,9 @@
*/
#include "ntlog.h"
+#include "flow_api_engine.h"
+#include "flow_api_hw_db_inline.h"
+#include "flow_id_table.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
@@ -12,13 +15,62 @@
* Public functions
*/
-int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev __rte_unused)
+int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+ if (!ndev->flow_mgnt_prepared) {
+ /* Check static arrays are big enough */
+ assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+
+ ndev->id_table_handle = ntnic_id_table_create();
+
+ if (ndev->id_table_handle == NULL)
+ goto err_exit0;
+
+ if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
+ goto err_exit0;
+
+ if (hw_db_inline_create(ndev, &ndev->hw_db_handle))
+ goto err_exit0;
+
+ ndev->flow_mgnt_prepared = 1;
+ }
+
+ return 0;
+
+err_exit0:
+ done_flow_management_of_ndev_profile_inline(ndev);
return -1;
}
-int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev __rte_unused)
+int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
{
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ if (ndev->flow_mgnt_prepared) {
+ flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+
+ flow_group_handle_destroy(&ndev->group_handle);
+ ntnic_id_table_destroy(ndev->id_table_handle);
+
+ flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+
+ hw_mod_tpe_reset(&ndev->be);
+ flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
+ flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+
+ hw_db_inline_destroy(ndev->hw_db_handle);
+
+#ifdef FLOW_DEBUG
+ ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
+ ndev->flow_mgnt_prepared = 0;
+ }
+
return 0;
}
--
2.45.0
* [PATCH v5 08/80] net/ntnic: add create/destroy implementation for NT flows
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (6 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 07/80] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 09/80] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
` (71 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Implements flow create/destroy functions with minimal capabilities:
* item: any
* action: port_id
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 6 +
doc/guides/nics/ntnic.rst | 2 +
doc/guides/rel_notes/release_24_11.rst | 1 +
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 105 +++
.../ntnic/include/stream_binary_flow_api.h | 4 +
drivers/net/ntnic/meson.build | 2 +
drivers/net/ntnic/nthw/flow_api/flow_group.c | 44 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 79 +++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 4 +
.../flow_api/profile_inline/flm_lrn_queue.c | 28 +
.../flow_api/profile_inline/flm_lrn_queue.h | 14 +
.../profile_inline/flow_api_hw_db_inline.c | 93 +++
.../profile_inline/flow_api_hw_db_inline.h | 64 ++
.../profile_inline/flow_api_profile_inline.c | 657 ++++++++++++++++++
15 files changed, 1106 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b9b87bdfe..1c653fd5a0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,3 +12,9 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
Linux = Y
x86-64 = Y
+
+[rte_flow items]
+any = Y
+
+[rte_flow actions]
+port_id = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 2c160ae592..a6568cba4e 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -40,6 +40,8 @@ Features
- Unicast MAC filter
- Multicast MAC filter
- Promiscuous mode (Enable only. The device always run promiscuous mode)
+- Flow API support.
+- Support for multiple rte_flow groups.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 15b64a1829..a235ce59d1 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -161,6 +161,7 @@ New Features
* Added NT flow backend initialization.
* Added initialization of FPGA modules related to flow HW offload.
* Added basic handling of the virtual queues.
+ * Added flow handling support.
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 748da89262..667dad6d5f 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -68,6 +68,9 @@ struct flow_nic_dev {
uint32_t flow_unique_id_counter;
/* linked list of all flows created on this NIC */
struct flow_handle *flow_base;
+ /* linked list of all FLM flows created on this NIC */
+ struct flow_handle *flow_base_flm;
+ pthread_mutex_t flow_mtx;
/* NIC backend API */
struct flow_api_backend_s be;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 2497c31a08..b8da5eafba 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -7,6 +7,10 @@
#define _FLOW_API_ENGINE_H_
#include <stdint.h>
+#include <stdatomic.h>
+
+#include "hw_mod_backend.h"
+#include "stream_binary_flow_api.h"
/*
* Resource management
@@ -50,10 +54,107 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+enum flow_port_type_e {
+ PORT_NONE, /* not defined or drop */
+ PORT_INTERNAL, /* no queues attached */
+ PORT_PHY, /* MAC phy output queue */
+ PORT_VIRT, /* Memory queues to Host */
+};
+
+struct output_s {
+ uint32_t owning_port_id;/* the port that owns this output destination */
+ enum flow_port_type_e type;
+ int id; /* depending on port type: queue ID or physical port id or not used */
+ int active; /* activated */
+};
+
+struct nic_flow_def {
+ /*
+ * Frame Decoder match info collected
+ */
+ int l2_prot;
+ int l3_prot;
+ int l4_prot;
+ int tunnel_prot;
+ int tunnel_l3_prot;
+ int tunnel_l4_prot;
+ int vlans;
+ int fragmentation;
+ int ip_prot;
+ int tunnel_ip_prot;
+ /*
+ * Additional meta data for various functions
+ */
+ int in_port_override;
+ int non_empty; /* default value is -1; value 1 means flow actions update */
+ struct output_s dst_id[MAX_OUTPUT_DEST];/* define the output to use */
+ /* total number of available queues defined for all outputs - i.e. number of dst_id's */
+ int dst_num_avail;
+
+ /*
+ * Mark or Action info collection
+ */
+ uint32_t mark;
+
+ uint32_t jump_to_group;
+
+ int full_offload;
+};
+
+enum flow_handle_type {
+ FLOW_HANDLE_TYPE_FLOW,
+ FLOW_HANDLE_TYPE_FLM,
+};
struct flow_handle {
+ enum flow_handle_type type;
+ uint32_t flm_id;
+ uint16_t caller_id;
+ uint16_t learn_ignored;
+
struct flow_eth_dev *dev;
struct flow_handle *next;
+ struct flow_handle *prev;
+
+ void *user_data;
+
+ union {
+ struct {
+ /*
+ * 1st step conversion and validation of flow
+ * verified and converted flow match + actions structure
+ */
+ struct nic_flow_def *fd;
+ /*
+ * 2nd step NIC HW resource allocation and configuration
+ * NIC resource management structures
+ */
+ struct {
+ uint32_t db_idx_counter;
+ uint32_t db_idxs[RES_COUNT];
+ };
+ uint32_t port_id; /* MAC port ID or override of virtual in_port */
+ };
+
+ struct {
+ uint32_t flm_db_idx_counter;
+ uint32_t flm_db_idxs[RES_COUNT];
+
+ uint32_t flm_data[10];
+ uint8_t flm_prot;
+ uint8_t flm_kid;
+ uint8_t flm_prio;
+ uint8_t flm_ft;
+
+ uint16_t flm_rpl_ext_ptr;
+ uint32_t flm_nat_ipv4;
+ uint16_t flm_nat_port;
+ uint8_t flm_dscp;
+ uint32_t flm_teid;
+ uint8_t flm_rqi;
+ uint8_t flm_qfi;
+ };
+ };
};
void km_free_ndev_resource_management(void **handle);
@@ -65,4 +166,8 @@ void kcc_free_ndev_resource_management(void **handle);
*/
int flow_group_handle_create(void **handle, uint32_t group_count);
int flow_group_handle_destroy(void **handle);
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out);
+
#endif /* _FLOW_API_ENGINE_H_ */
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index a6244d4082..d878b848c2 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -8,6 +8,10 @@
#include "rte_flow.h"
#include "rte_flow_driver.h"
+
+/* Max RSS hash key length in bytes */
+#define MAX_RSS_KEY_LEN 40
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index f7292144ac..e1fef37ccb 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -50,6 +50,8 @@ sources = files(
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
+ 'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_group.c b/drivers/net/ntnic/nthw/flow_api/flow_group.c
index a7371f3aad..f76986b178 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_group.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_group.c
@@ -53,3 +53,47 @@ int flow_group_handle_destroy(void **handle)
return 0;
}
+
+int flow_group_translate_get(void *handle, uint8_t owner_id, uint8_t port_id, uint32_t group_in,
+ uint32_t *group_out)
+{
+ struct group_handle_s *group_handle = (struct group_handle_s *)handle;
+ uint32_t *table_ptr;
+ uint32_t lookup;
+
+ if (group_handle == NULL || group_in >= group_handle->group_count || port_id >= PORT_COUNT)
+ return -1;
+
+ /* Don't translate group 0 */
+ if (group_in == 0) {
+ *group_out = 0;
+ return 0;
+ }
+
+ table_ptr = &group_handle->translation_table[port_id * OWNER_ID_COUNT * PORT_COUNT +
+ owner_id * OWNER_ID_COUNT + group_in];
+ lookup = *table_ptr;
+
+ if (lookup == 0) {
+ for (lookup = 1; lookup < group_handle->group_count &&
+ group_handle->lookup_entries[lookup].ref_counter > 0;
+ ++lookup)
+ ;
+
+ if (lookup < group_handle->group_count) {
+ group_handle->lookup_entries[lookup].reverse_lookup = table_ptr;
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+
+ *table_ptr = lookup;
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ group_handle->lookup_entries[lookup].ref_counter += 1;
+ }
+
+ *group_out = lookup;
+ return 0;
+}
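The translation in flow_group_translate_get() can be illustrated with a reduced, hypothetical model (names `grp_map` and `grp_translate` are illustrative, not part of the driver): an rte_flow group number is mapped to a dense hardware group id, group 0 is never translated, and repeated lookups of the same input reuse and ref-count the previously allocated id.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define GROUP_COUNT 8

/* Simplified sketch of the per-port group translation table. */
struct grp_map {
	uint32_t table[GROUP_COUNT]; /* group_in -> hw id (0 = unassigned) */
	uint32_t ref[GROUP_COUNT];   /* reference counter per hw id */
};

static int grp_translate(struct grp_map *m, uint32_t group_in, uint32_t *group_out)
{
	if (group_in >= GROUP_COUNT)
		return -1;

	if (group_in == 0) {	/* group 0 is identity-mapped, never translated */
		*group_out = 0;
		return 0;
	}

	uint32_t hw = m->table[group_in];

	if (hw == 0) {		/* first use: linear scan for a free hw id */
		for (hw = 1; hw < GROUP_COUNT && m->ref[hw] > 0; ++hw)
			;
		if (hw == GROUP_COUNT)
			return -1;	/* translation space exhausted */
		m->table[group_in] = hw;
	}

	m->ref[hw] += 1;
	*group_out = hw;
	return 0;
}
```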
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 9b46848e59..5635ac4524 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -4,6 +4,7 @@
*/
#include <pthread.h>
+#include <stdint.h>
#include <stdlib.h>
#include <string.h>
@@ -11,6 +12,10 @@
#define NTNIC_ARRAY_BITS 14
#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
+#define NTNIC_ARRAY_MASK (NTNIC_ARRAY_SIZE - 1)
+#define NTNIC_MAX_ID (NTNIC_ARRAY_SIZE * NTNIC_ARRAY_SIZE)
+#define NTNIC_MAX_ID_MASK (NTNIC_MAX_ID - 1)
+#define NTNIC_MIN_FREE 1000
struct ntnic_id_table_element {
union flm_handles handle;
@@ -29,6 +34,36 @@ struct ntnic_id_table_data {
uint32_t free_count;
};
+static inline struct ntnic_id_table_element *
+ntnic_id_table_array_find_element(struct ntnic_id_table_data *handle, uint32_t id)
+{
+ uint32_t idx_d1 = id & NTNIC_ARRAY_MASK;
+ uint32_t idx_d2 = (id >> NTNIC_ARRAY_BITS) & NTNIC_ARRAY_MASK;
+
+ if (handle->arrays[idx_d2] == NULL) {
+ handle->arrays[idx_d2] =
+ calloc(NTNIC_ARRAY_SIZE, sizeof(struct ntnic_id_table_element));
+ }
+
+ return &handle->arrays[idx_d2][idx_d1];
+}
+
+static inline uint32_t ntnic_id_table_array_pop_free_id(struct ntnic_id_table_data *handle)
+{
+ uint32_t id = 0;
+
+ if (handle->free_count > NTNIC_MIN_FREE) {
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_tail);
+ id = handle->free_tail;
+
+ handle->free_tail = element->handle.idx & NTNIC_MAX_ID_MASK;
+ handle->free_count -= 1;
+ }
+
+ return id;
+}
+
void *ntnic_id_table_create(void)
{
struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
@@ -50,3 +85,47 @@ void ntnic_id_table_destroy(void *id_table)
free(id_table);
}
+
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ uint32_t new_id = ntnic_id_table_array_pop_free_id(handle);
+
+ if (new_id == 0)
+ new_id = handle->next_id++;
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, new_id);
+ element->caller_id = caller_id;
+ element->type = type;
+ memcpy(&element->handle, &flm_h, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+
+ return new_id;
+}
+
+void ntnic_id_table_free_id(void *id_table, uint32_t id)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *current_element =
+ ntnic_id_table_array_find_element(handle, id);
+ memset(current_element, 0, sizeof(struct ntnic_id_table_element));
+
+ struct ntnic_id_table_element *element =
+ ntnic_id_table_array_find_element(handle, handle->free_head);
+ element->handle.idx = id;
+ handle->free_head = id;
+ handle->free_count += 1;
+
+ if (handle->free_tail == 0)
+ handle->free_tail = handle->free_head;
+
+ pthread_mutex_unlock(&handle->mtx);
+}
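The id table above splits a flow id into two 14-bit indices so second-level arrays can be allocated lazily as ids grow. A stand-alone sketch of the index arithmetic from ntnic_id_table_array_find_element() (helper name `split_id` is illustrative), assuming the same NTNIC_ARRAY_BITS value:

```c
#include <assert.h>
#include <stdint.h>

#define ARRAY_BITS 14
#define ARRAY_SIZE (1u << ARRAY_BITS)
#define ARRAY_MASK (ARRAY_SIZE - 1u)

/* The low 14 bits select the slot inside a second-level array,
 * the next 14 bits select which second-level array to use. */
static void split_id(uint32_t id, uint32_t *d1, uint32_t *d2)
{
	*d1 = id & ARRAY_MASK;			/* slot within the array */
	*d2 = (id >> ARRAY_BITS) & ARRAY_MASK;	/* which array */
}
```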
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index 13455f1165..e190fe4a11 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -16,4 +16,8 @@ union flm_handles {
void *ntnic_id_table_create(void);
void ntnic_id_table_destroy(void *id_table);
+uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t caller_id,
+ uint8_t type);
+void ntnic_id_table_free_id(void *id_table, uint32_t id);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
new file mode 100644
index 0000000000..ad7efafe08
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+
+#include "hw_mod_flm_v25.h"
+
+#include "flm_lrn_queue.h"
+
+#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ unsigned int n = rte_ring_enqueue_zc_burst_elem_start(q, ELEM_SIZE, 1, &zcd, NULL);
+ return (n == 0) ? NULL : zcd.ptr1;
+}
+
+void flm_lrn_queue_release_write_buffer(void *q)
+{
+ rte_ring_enqueue_zc_elem_finish(q, 1);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
new file mode 100644
index 0000000000..8cee0c8e78
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_LRN_QUEUE_H_
+#define _FLM_LRN_QUEUE_H_
+
+#include <stdint.h>
+
+uint32_t *flm_lrn_queue_get_write_buffer(void *q);
+void flm_lrn_queue_release_write_buffer(void *q);
+
+#endif /* _FLM_LRN_QUEUE_H_ */
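The learn-queue contract above (the real implementation uses rte_ring zero-copy enqueue) can be modeled with a minimal single-threaded circular buffer; all names below (`lrn_queue`, `q_get_write_buffer`, `q_release_write_buffer`) are hypothetical stand-ins: get hands out a slot in place, release commits it, and NULL signals a full queue.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define Q_SLOTS 4
#define Q_ELEM_WORDS 8

struct lrn_queue {
	uint32_t buf[Q_SLOTS][Q_ELEM_WORDS];
	unsigned int head;	/* next slot to hand out */
	unsigned int count;	/* committed slots */
};

static uint32_t *q_get_write_buffer(struct lrn_queue *q)
{
	if (q->count == Q_SLOTS)
		return NULL;	/* queue full; caller retries later */
	return q->buf[q->head];
}

static void q_release_write_buffer(struct lrn_queue *q)
{
	q->head = (q->head + 1) % Q_SLOTS;	/* commit the pending slot */
	q->count += 1;
}
```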
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5fda11183c..4ea9387c80 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -3,7 +3,11 @@
*/
+#include "hw_mod_backend.h"
+#include "flow_api_engine.h"
+
#include "flow_api_hw_db_inline.h"
+#include "rte_common.h"
/******************************************************************************/
/* Handle */
@@ -57,3 +61,92 @@ void hw_db_inline_destroy(void *db_handle)
free(db);
}
+
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size)
+{
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_COT:
+ hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/******************************************************************************/
+/* COT */
+/******************************************************************************/
+
+static int hw_db_inline_cot_compare(const struct hw_db_inline_cot_data *data1,
+ const struct hw_db_inline_cot_data *data2)
+{
+ return data1->matcher_color_contrib == data2->matcher_color_contrib &&
+ data1->frag_rcp == data2->frag_rcp;
+}
+
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cot_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_COT;
+
+ for (uint32_t i = 1; i < db->nb_cot; ++i) {
+ int ref = db->cot[i].ref;
+
+ if (ref > 0 && hw_db_inline_cot_compare(data, &db->cot[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cot_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cot[idx.ids].ref = 1;
+ memcpy(&db->cot[idx.ids].data, data, sizeof(struct hw_db_inline_cot_data));
+
+ return idx;
+}
+
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cot[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_handle,
+ struct hw_db_cot_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cot[idx.ids].ref -= 1;
+
+ if (db->cot[idx.ids].ref <= 0) {
+ memset(&db->cot[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cot_data));
+ db->cot[idx.ids].ref = 0;
+ }
+}
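The add/ref/deref trio above de-duplicates equal COT entries through reference counting. A reduced, hypothetical model (slot 0 reserved, as in the driver; names `slot_add`/`slot_deref` are illustrative): adding equal data twice yields the same index with the counter bumped, and a slot is recycled once its counter drops back to zero.

```c
#include <assert.h>
#include <string.h>

#define NB_SLOTS 4

struct slot { int ref; int data; };
static struct slot slots[NB_SLOTS];

static int slot_add(int data)
{
	int free_idx = -1;

	for (int i = 1; i < NB_SLOTS; ++i) {	/* slot 0 is reserved */
		if (slots[i].ref > 0 && slots[i].data == data) {
			slots[i].ref += 1;	/* reuse matching entry */
			return i;
		}
		if (free_idx < 0 && slots[i].ref <= 0)
			free_idx = i;		/* remember first free slot */
	}

	if (free_idx < 0)
		return -1;			/* database full */

	slots[free_idx].ref = 1;
	slots[free_idx].data = data;
	return free_idx;
}

static void slot_deref(int idx)
{
	if (--slots[idx].ref <= 0) {
		slots[idx].data = 0;		/* clear the recycled slot */
		slots[idx].ref = 0;
	}
}
```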
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 23caf73cf3..0116af015d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -9,15 +9,79 @@
#include "flow_api.h"
+#define HW_DB_INLINE_MAX_QST_PER_QSL 128
+#define HW_DB_INLINE_MAX_ENCAP_SIZE 128
+
+#define HW_DB_IDX \
+ union { \
+ struct { \
+ uint32_t id1 : 8; \
+ uint32_t id2 : 8; \
+ uint32_t id3 : 8; \
+ uint32_t type : 7; \
+ uint32_t error : 1; \
+ }; \
+ struct { \
+ uint32_t ids : 24; \
+ }; \
+ uint32_t raw; \
+ }
+
+/* Strongly typed int types */
+struct hw_db_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_cot_idx {
+ HW_DB_IDX;
+};
+
+enum hw_db_idx_type {
+ HW_DB_IDX_TYPE_NONE = 0,
+ HW_DB_IDX_TYPE_COT,
+};
+
+/* Functionality data types */
+struct hw_db_inline_qsl_data {
+ uint32_t discard : 1;
+ uint32_t drop : 1;
+ uint32_t table_size : 7;
+ uint32_t retransmit : 1;
+ uint32_t padding : 22;
+
+ struct {
+ uint16_t queue : 7;
+ uint16_t queue_en : 1;
+ uint16_t tx_port : 3;
+ uint16_t tx_port_en : 1;
+ uint16_t padding : 4;
+ } table[HW_DB_INLINE_MAX_QST_PER_QSL];
+};
+
struct hw_db_inline_cot_data {
uint32_t matcher_color_contrib : 4;
uint32_t frag_rcp : 4;
uint32_t padding : 24;
};
+struct hw_db_inline_hsh_data {
+ uint32_t func;
+ uint64_t hash_mask;
+ uint8_t key[MAX_RSS_KEY_LEN];
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
void hw_db_inline_destroy(void *db_handle);
+void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
+ uint32_t size);
+
+/**/
+struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cot_data *data);
+void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
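The HW_DB_IDX macro above packs three 8-bit ids, a 7-bit type and an error flag into one 32-bit word, with `ids` reading the three id bytes back as a single 24-bit value. A stand-alone copy illustrates the overlay; note that bit-field layout is implementation-defined, so the assertions below assume the little-endian GCC/Clang layout the driver builds with:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-alone copy of the HW_DB_IDX union from flow_api_hw_db_inline.h. */
union hw_db_idx {
	struct {
		uint32_t id1 : 8;
		uint32_t id2 : 8;
		uint32_t id3 : 8;
		uint32_t type : 7;
		uint32_t error : 1;
	};
	struct {
		uint32_t ids : 24;	/* id1..id3 read back as one value */
	};
	uint32_t raw;
};
```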
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 6d91678c56..d61912d49d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4,12 +4,545 @@
*/
#include "ntlog.h"
+#include "nt_util.h"
+
+#include "hw_mod_backend.h"
+#include "flm_lrn_queue.h"
+#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
#include "flow_id_table.h"
+#include "stream_binary_flow_api.h"
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#include <rte_common.h>
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
+static void *flm_lrn_queue_arr;
+
+struct flm_flow_key_def_s {
+ union {
+ struct {
+ uint64_t qw0_dyn : 7;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 7;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 7;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 7;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_proto : 1;
+ uint64_t inner_proto : 1;
+ uint64_t pad : 2;
+ };
+ uint64_t data;
+ };
+ uint32_t mask[10];
+};
+
+/*
+ * Flow Matcher functionality
+ */
+static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
+{
+ struct flow_eth_dev *dev = ndev->eth_base;
+
+ while (dev) {
+ if (dev->port_id == port_id)
+ return dev->port;
+
+ dev = dev->next;
+ }
+
+ return UINT8_MAX;
+}
+
+static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base)
+ ndev->flow_base->prev = fh;
+
+ fh->next = ndev->flow_base;
+ fh->prev = NULL;
+ ndev->flow_base = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+	pthread_mutex_lock(&ndev->flow_mtx);
+
+	/* Read neighbour pointers under the lock to avoid racing concurrent list updates */
+	struct flow_handle *next = fh->next;
+	struct flow_handle *prev = fh->prev;
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base == fh) {
+ ndev->flow_base = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh)
+{
+ pthread_mutex_lock(&ndev->flow_mtx);
+
+ if (ndev->flow_base_flm)
+ ndev->flow_base_flm->prev = fh;
+
+ fh->next = ndev->flow_base_flm;
+ fh->prev = NULL;
+ ndev->flow_base_flm = fh;
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh_flm)
+{
+	pthread_mutex_lock(&ndev->flow_mtx);
+
+	/* Read neighbour pointers under the lock to avoid racing concurrent list updates */
+	struct flow_handle *next = fh_flm->next;
+	struct flow_handle *prev = fh_flm->prev;
+
+ if (next && prev) {
+ prev->next = next;
+ next->prev = prev;
+
+ } else if (next) {
+ ndev->flow_base_flm = next;
+ next->prev = NULL;
+
+ } else if (prev) {
+ prev->next = NULL;
+
+ } else if (ndev->flow_base_flm == fh_flm) {
+ ndev->flow_base_flm = NULL;
+ }
+
+ pthread_mutex_unlock(&ndev->flow_mtx);
+}
+
+static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
+{
+ if (fd) {
+ fd->full_offload = -1;
+ fd->in_port_override = -1;
+ fd->mark = UINT32_MAX;
+ fd->jump_to_group = UINT32_MAX;
+
+ fd->l2_prot = -1;
+ fd->l3_prot = -1;
+ fd->l4_prot = -1;
+ fd->vlans = 0;
+ fd->tunnel_prot = -1;
+ fd->tunnel_l3_prot = -1;
+ fd->tunnel_l4_prot = -1;
+ fd->fragmentation = -1;
+ fd->ip_prot = -1;
+ fd->tunnel_ip_prot = -1;
+
+ fd->non_empty = -1;
+ }
+
+ return fd;
+}
+
+static inline struct nic_flow_def *allocate_nic_flow_def(void)
+{
+ return prepare_nic_flow_def(calloc(1, sizeof(struct nic_flow_def)));
+}
+
+static bool fd_has_empty_pattern(const struct nic_flow_def *fd)
+{
+ return fd && fd->vlans == 0 && fd->l2_prot < 0 && fd->l3_prot < 0 && fd->l4_prot < 0 &&
+ fd->tunnel_prot < 0 && fd->tunnel_l3_prot < 0 && fd->tunnel_l4_prot < 0 &&
+ fd->ip_prot < 0 && fd->tunnel_ip_prot < 0 && fd->non_empty < 0;
+}
+
+static inline const void *memcpy_mask_if(void *dest, const void *src, const void *mask,
+ size_t count)
+{
+ if (mask == NULL)
+ return src;
+
+ unsigned char *dest_ptr = (unsigned char *)dest;
+ const unsigned char *src_ptr = (const unsigned char *)src;
+ const unsigned char *mask_ptr = (const unsigned char *)mask;
+
+ for (size_t i = 0; i < count; ++i)
+ dest_ptr[i] = src_ptr[i] & mask_ptr[i];
+
+ return dest;
+}
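memcpy_mask_if() above has a small but easily misread contract: with no mask the source pointer is returned as-is (no copy), otherwise each source byte is ANDed with the mask into the destination buffer, which is then returned. A self-contained copy with the behaviour made explicit:

```c
#include <assert.h>
#include <stddef.h>

/* Copy of memcpy_mask_if() from flow_api_profile_inline.c. */
static const void *memcpy_mask_if(void *dest, const void *src, const void *mask,
				  size_t count)
{
	if (mask == NULL)
		return src;	/* no mask: hand back the source untouched */

	unsigned char *d = dest;
	const unsigned char *s = src;
	const unsigned char *m = mask;

	for (size_t i = 0; i < count; ++i)
		d[i] = s[i] & m[i];	/* masked copy, byte by byte */

	return dest;
}
```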
+
+static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLM)
+ return -1;
+
+ if (flm_op == NT_FLM_OP_LEARN) {
+ union flm_handles flm_h;
+ flm_h.p = fh;
+ fh->flm_id = ntnic_id_table_get_id(fh->dev->ndev->id_table_handle, flm_h,
+ fh->caller_id, 1);
+ }
+
+ uint32_t flm_id = fh->flm_id;
+
+ if (flm_op == NT_FLM_OP_UNLEARN) {
+ ntnic_id_table_free_id(fh->dev->ndev->id_table_handle, flm_id);
+
+ if (fh->learn_ignored == 1)
+ return 0;
+ }
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->id = flm_id;
+
+ learn_record->qw0[0] = fh->flm_data[9];
+ learn_record->qw0[1] = fh->flm_data[8];
+ learn_record->qw0[2] = fh->flm_data[7];
+ learn_record->qw0[3] = fh->flm_data[6];
+ learn_record->qw4[0] = fh->flm_data[5];
+ learn_record->qw4[1] = fh->flm_data[4];
+ learn_record->qw4[2] = fh->flm_data[3];
+ learn_record->qw4[3] = fh->flm_data[2];
+ learn_record->sw8 = fh->flm_data[1];
+ learn_record->sw9 = fh->flm_data[0];
+ learn_record->prot = fh->flm_prot;
+
+ /* Last non-zero mtr is used for statistics */
+ uint8_t mbrs = 0;
+
+ learn_record->vol_idx = mbrs;
+
+ learn_record->nat_ip = fh->flm_nat_ipv4;
+ learn_record->nat_port = fh->flm_nat_port;
+ learn_record->nat_en = fh->flm_nat_ipv4 || fh->flm_nat_port ? 1 : 0;
+
+ learn_record->dscp = fh->flm_dscp;
+ learn_record->teid = fh->flm_teid;
+ learn_record->qfi = fh->flm_qfi;
+ learn_record->rqi = fh->flm_rqi;
+ /* Lower 10 bits used for RPL EXT PTR */
+ learn_record->color = fh->flm_rpl_ext_ptr & 0x3ff;
+
+ learn_record->ent = 0;
+ learn_record->op = flm_op & 0xf;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->prio = fh->flm_prio & 0x3;
+ learn_record->ft = fh->flm_ft;
+ learn_record->kid = fh->flm_kid;
+ learn_record->eor = 1;
+ learn_record->scrub_prof = 0;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+ return 0;
+}
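flm_flow_programming() writes the ten 32-bit key words of fh->flm_data[] into the learn record in reverse order: qw0[0..3] take data[9..6], qw4[0..3] take data[5..2], sw8 takes data[1] and sw9 takes data[0]. A sketch of just that packing (struct `lrn_key` and helper `pack_key` are illustrative names, not driver API):

```c
#include <assert.h>
#include <stdint.h>

struct lrn_key {
	uint32_t qw0[4];
	uint32_t qw4[4];
	uint32_t sw8;
	uint32_t sw9;
};

/* Mirror the reversed word order used when filling the learn record. */
static void pack_key(struct lrn_key *k, const uint32_t data[10])
{
	for (int i = 0; i < 4; ++i) {
		k->qw0[i] = data[9 - i];
		k->qw4[i] = data[5 - i];
	}
	k->sw8 = data[1];
	k->sw9 = data[0];
}
```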
+
+/*
+ * This function must be callable without locking any mutexes
+ */
+static int interpret_flow_actions(const struct flow_eth_dev *dev,
+ const struct rte_flow_action action[],
+ const struct rte_flow_action *action_mask,
+ struct nic_flow_def *fd,
+ struct rte_flow_error *error,
+ uint32_t *num_dest_port,
+ uint32_t *num_queues)
+{
+ unsigned int encap_decap_order = 0;
+
+ *num_dest_port = 0;
+ *num_queues = 0;
+
+ if (action == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow actions missing");
+ return -1;
+ }
+
+ /*
+ * Gather flow match + actions and convert into internal flow definition structure (struct
+ * nic_flow_def_s) This is the 1st step in the flow creation - validate, convert and
+ * prepare
+ */
+ for (int aidx = 0; action[aidx].type != RTE_FLOW_ACTION_TYPE_END; ++aidx) {
+ switch (action[aidx].type) {
+ case RTE_FLOW_ACTION_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_PORT_ID", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_port_id port_id_tmp;
+ const struct rte_flow_action_port_id *port_id =
+ memcpy_mask_if(&port_id_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_port_id));
+
+ if (*num_dest_port > 0) {
+ NT_LOG(ERR, FILTER,
+					       "Multiple port_id actions for one flow are not supported");
+ flow_nic_set_error(ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED,
+ error);
+ return -1;
+ }
+
+ uint8_t port = get_port_from_port_id(dev->ndev, port_id->id);
+
+ if (fd->dst_num_avail == MAX_OUTPUT_DEST) {
+ NT_LOG(ERR, FILTER, "Too many output destinations");
+ flow_nic_set_error(ERR_OUTPUT_TOO_MANY, error);
+ return -1;
+ }
+
+ if (port >= dev->ndev->be.num_phy_ports) {
+ NT_LOG(ERR, FILTER, "Phy port out of range");
+ flow_nic_set_error(ERR_OUTPUT_INVALID, error);
+ return -1;
+ }
+
+ /* New destination port to add */
+ fd->dst_id[fd->dst_num_avail].owning_port_id = port_id->id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_PHY;
+ fd->dst_id[fd->dst_num_avail].id = (int)port;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ if (fd->full_offload < 0)
+ fd->full_offload = 1;
+
+ *num_dest_port += 1;
+
+ NT_LOG(DBG, FILTER, "Phy port ID: %i", (int)port);
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
+ action[aidx].type);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+ }
+
+ if (!(encap_decap_order == 0 || encap_decap_order == 2)) {
+ NT_LOG(ERR, FILTER, "Invalid encap/decap actions");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int interpret_flow_elements(const struct flow_eth_dev *dev,
+ const struct rte_flow_item elem[],
+ struct nic_flow_def *fd __rte_unused,
+ struct rte_flow_error *error,
+ uint16_t implicit_vlan_vid __rte_unused,
+ uint32_t *in_port_id,
+ uint32_t *packet_data,
+ uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
+{
+ *in_port_id = UINT32_MAX;
+
+ memset(packet_data, 0x0, sizeof(uint32_t) * 10);
+ memset(packet_mask, 0x0, sizeof(uint32_t) * 10);
+ memset(key_def, 0x0, sizeof(struct flm_flow_key_def_s));
+
+ if (elem == NULL) {
+ flow_nic_set_error(ERR_FAILED, error);
+ NT_LOG(ERR, FILTER, "Flow items missing");
+ return -1;
+ }
+
+ int qw_reserved_mac = 0;
+ int qw_reserved_ipv6 = 0;
+
+ int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
+
+ if (qw_free < 0) {
+ NT_LOG(ERR, FILTER, "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ANY:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
+ dev->ndev->adapter_no, dev->port);
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
+ (int)elem[eidx].type);
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
+ uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
+ uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
+ uint32_t priority __rte_unused)
+{
+ struct nic_flow_def *fd;
+ struct flow_handle fh_copy;
+
+ if (fh->type != FLOW_HANDLE_TYPE_FLOW)
+ return -1;
+
+ memcpy(&fh_copy, fh, sizeof(struct flow_handle));
+ memset(fh, 0x0, sizeof(struct flow_handle));
+ fd = fh_copy.fd;
+
+ fh->type = FLOW_HANDLE_TYPE_FLM;
+ fh->caller_id = fh_copy.caller_id;
+ fh->dev = fh_copy.dev;
+ fh->next = fh_copy.next;
+ fh->prev = fh_copy.prev;
+ fh->user_data = fh_copy.user_data;
+
+ fh->flm_db_idx_counter = fh_copy.db_idx_counter;
+
+ for (int i = 0; i < RES_COUNT; ++i)
+ fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+
+ free(fd);
+
+ return 0;
+}
+
+static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
+ const struct nic_flow_def *fd __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ uint32_t group __rte_unused,
+ uint32_t local_idxs[] __rte_unused,
+ uint32_t *local_idx_counter __rte_unused,
+ uint16_t *flm_rpl_ext_ptr __rte_unused,
+ uint32_t *flm_ft __rte_unused,
+ uint32_t *flm_scrub __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ return 0;
+}
+
+static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct nic_flow_def *fd,
+ const struct rte_flow_attr *attr,
+ uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
+ struct rte_flow_error *error, uint32_t port_id,
+ uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ struct flm_flow_key_def_s *key_def __rte_unused)
+{
+	struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
+
+	if (fh == NULL)
+		return NULL;
+
+ fh->type = FLOW_HANDLE_TYPE_FLOW;
+ fh->port_id = port_id;
+ fh->dev = dev;
+ fh->fd = fd;
+ fh->caller_id = caller_id;
+
+ struct hw_db_inline_qsl_data qsl_data;
+
+ struct hw_db_inline_hsh_data hsh_data;
+
+ if (attr->group > 0 && fd_has_empty_pattern(fd)) {
+ /*
+ * Default flow for group 1..32
+ */
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, NULL, NULL, NULL, error)) {
+ goto error_out;
+ }
+
+ nic_insert_flow(dev->ndev, fh);
+
+ } else if (attr->group > 0) {
+ /*
+ * Flow for group 1..32
+ */
+
+ /* Setup Actions */
+ uint16_t flm_rpl_ext_ptr = 0;
+ uint32_t flm_ft = 0;
+ uint32_t flm_scrub = 0;
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, attr->group, fh->db_idxs,
+ &fh->db_idx_counter, &flm_rpl_ext_ptr, &flm_ft,
+ &flm_scrub, error)) {
+ goto error_out;
+ }
+
+ /* Program flow */
+ convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ flm_scrub, attr->priority & 0x3);
+ flm_flow_programming(fh, NT_FLM_OP_LEARN);
+
+ nic_insert_flow_flm(dev->ndev, fh);
+
+ } else {
+ /*
+ * Flow for group 0
+ */
+ nic_insert_flow(dev->ndev, fh);
+ }
+
+ return fh;
+
+error_out:
+
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ } else {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ }
+
+ free(fh);
+
+ return NULL;
+}
/*
* Public functions
@@ -82,6 +615,92 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev __rte_un
const struct rte_flow_action action[] __rte_unused,
struct rte_flow_error *error __rte_unused)
{
+ struct flow_handle *fh = NULL;
+ int res;
+
+ uint32_t port_id = UINT32_MAX;
+ uint32_t num_dest_port;
+ uint32_t num_queues;
+
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ struct rte_flow_attr attr_local;
+ memcpy(&attr_local, attr, sizeof(struct rte_flow_attr));
+ uint16_t forced_vlan_vid_local = forced_vlan_vid;
+ uint16_t caller_id_local = caller_id;
+
+ if (attr_local.group > 0)
+ forced_vlan_vid_local = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ if (fd == NULL)
+ goto err_exit;
+
+ res = interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
+
+ if (res)
+ goto err_exit;
+
+ res = interpret_flow_elements(dev, elem, fd, error, forced_vlan_vid_local, &port_id,
+ packet_data, packet_mask, &key_def);
+
+ if (res)
+ goto err_exit;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ /* Translate group IDs */
+ if (fd->jump_to_group != UINT32_MAX &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ fd->jump_to_group, &fd->jump_to_group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (attr_local.group > 0 &&
+ flow_group_translate_get(dev->ndev->group_handle, caller_id_local, dev->port,
+ attr_local.group, &attr_local.group)) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto err_exit;
+ }
+
+ if (port_id == UINT32_MAX)
+ port_id = dev->port_id;
+
+ /* Create and flush filter to NIC */
+ fh = create_flow_filter(dev, fd, &attr_local, forced_vlan_vid_local,
+ caller_id_local, error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ if (!fh)
+ goto err_exit;
+
+ NT_LOG(DBG, FILTER, "New FLOW: fh (flow handle) %p, fd (flow definition) %p", fh, fd);
+ NT_LOG(DBG, FILTER, ">>>>> [Dev %p] Nic %i, Port %i: fh %p fd %p - implementation <<<<<",
+ dev, dev->ndev->adapter_no, dev->port, fh, fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return fh;
+
+err_exit:
+
+ if (fh)
+ flow_destroy_locked_profile_inline(dev, fh, NULL);
+
+ else
+ free(fd);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ NT_LOG(ERR, FILTER, "ERR: %s", __func__);
return NULL;
}
@@ -96,6 +715,44 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
flow_nic_set_error(ERR_SUCCESS, error);
+ /* take flow out of ndev list - may not have been put there yet */
+ if (fh->type == FLOW_HANDLE_TYPE_FLM)
+ nic_remove_flow_flm(dev->ndev, fh);
+
+ else
+ nic_remove_flow(dev->ndev, fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_WRITE);
+#endif
+
+ NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ flm_flow_programming(fh, NT_FLM_OP_UNLEARN);
+
+ } else {
+ NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
+ free(fh->fd);
+ }
+
+ if (err) {
+ NT_LOG(ERR, FILTER, "FAILED removing flow: %p", fh);
+ flow_nic_set_error(ERR_REMOVE_FLOW_FAILED, error);
+ }
+
+ free(fh);
+
+#ifdef FLOW_DEBUG
+ dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
+#endif
+
return err;
}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
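The create path above translates caller-scoped rte_flow group numbers into device-wide group ids via flow_group_translate_get() before programming hardware, failing with ERR_MATCH_RESOURCE_EXHAUSTION when no mapping can be made. The sketch below illustrates that idea only — the table layout and function signature are invented here, not the driver's actual structures:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_GROUPS 8

/* Hypothetical mapping table: (caller_id, port, user group) -> hw group. */
struct group_map_s {
	int in_use;
	uint16_t caller_id;
	uint8_t port;
	uint32_t user_group;
	uint32_t hw_group;
};

static struct group_map_s group_map[MAX_GROUPS];

/* Returns 0 and sets *hw_group on success; -1 when the table is exhausted
 * (the driver reports ERR_MATCH_RESOURCE_EXHAUSTION in that case). */
static int group_translate_get(uint16_t caller_id, uint8_t port,
			       uint32_t user_group, uint32_t *hw_group)
{
	int free_slot = -1;

	for (int i = 0; i < MAX_GROUPS; i++) {
		if (group_map[i].in_use && group_map[i].caller_id == caller_id &&
		    group_map[i].port == port &&
		    group_map[i].user_group == user_group) {
			/* Existing translation: reuse it. */
			*hw_group = group_map[i].hw_group;
			return 0;
		}

		if (!group_map[i].in_use && free_slot < 0)
			free_slot = i;
	}

	if (free_slot < 0)
		return -1;

	/* Allocate a new device-wide group id for this caller tuple. */
	group_map[free_slot] = (struct group_map_s){ 1, caller_id, port,
						     user_group,
						     (uint32_t)free_slot + 1 };
	*hw_group = group_map[free_slot].hw_group;
	return 0;
}
```

Repeated lookups with the same (caller, port, group) tuple must return the same hw group, which is what lets several flows jump to one shared group.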
* [PATCH v5 09/80] net/ntnic: add infrastructure for flow actions and items
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (7 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 08/80] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 10/80] net/ntnic: add action queue Serhii Iliushyk
` (70 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add entities (utilities, structures, etc.) required for flow support
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Change cast to void with __rte_unused
---
drivers/net/ntnic/include/flow_api.h | 34 ++++++++
drivers/net/ntnic/include/flow_api_engine.h | 46 +++++++++++
drivers/net/ntnic/include/hw_mod_backend.h | 44 ++++++++++
drivers/net/ntnic/nthw/flow_api/flow_km.c | 81 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 68 +++++++++++++++-
5 files changed, 269 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 667dad6d5f..7f031ccda8 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -85,13 +85,47 @@ struct flow_nic_dev {
enum flow_nic_err_msg_e {
ERR_SUCCESS = 0,
ERR_FAILED = 1,
+ ERR_MEMORY = 2,
ERR_OUTPUT_TOO_MANY = 3,
+ ERR_RSS_TOO_MANY_QUEUES = 4,
+ ERR_VLAN_TYPE_NOT_SUPPORTED = 5,
+ ERR_VXLAN_HEADER_NOT_ACCEPTED = 6,
+ ERR_VXLAN_POP_INVALID_RECIRC_PORT = 7,
+ ERR_VXLAN_POP_FAILED_CREATING_VTEP = 8,
+ ERR_MATCH_VLAN_TOO_MANY = 9,
+ ERR_MATCH_INVALID_IPV6_HDR = 10,
+ ERR_MATCH_TOO_MANY_TUNNEL_PORTS = 11,
ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM = 12,
+ ERR_MATCH_FAILED_BY_HW_LIMITS = 13,
ERR_MATCH_RESOURCE_EXHAUSTION = 14,
+ ERR_MATCH_FAILED_TOO_COMPLEX = 15,
+ ERR_ACTION_REPLICATION_FAILED = 16,
+ ERR_ACTION_OUTPUT_RESOURCE_EXHAUSTION = 17,
+ ERR_ACTION_TUNNEL_HEADER_PUSH_OUTPUT_LIMIT = 18,
+ ERR_ACTION_INLINE_MOD_RESOURCE_EXHAUSTION = 19,
+ ERR_ACTION_RETRANSMIT_RESOURCE_EXHAUSTION = 20,
+ ERR_ACTION_FLOW_COUNTER_EXHAUSTION = 21,
+ ERR_ACTION_INTERNAL_RESOURCE_EXHAUSTION = 22,
+ ERR_INTERNAL_QSL_COMPARE_FAILED = 23,
+ ERR_INTERNAL_CAT_FUNC_REUSE_FAILED = 24,
+ ERR_MATCH_ENTROPHY_FAILED = 25,
+ ERR_MATCH_CAM_EXHAUSTED = 26,
+ ERR_INTERNAL_VIRTUAL_PORT_CREATION_FAILED = 27,
ERR_ACTION_UNSUPPORTED = 28,
ERR_REMOVE_FLOW_FAILED = 29,
+ ERR_ACTION_NO_OUTPUT_DEFINED_USE_DEFAULT = 30,
+ ERR_ACTION_NO_OUTPUT_QUEUE_FOUND = 31,
+ ERR_MATCH_UNSUPPORTED_ETHER_TYPE = 32,
ERR_OUTPUT_INVALID = 33,
+ ERR_MATCH_PARTIAL_OFFLOAD_NOT_SUPPORTED = 34,
+ ERR_MATCH_CAT_CAM_EXHAUSTED = 35,
+ ERR_MATCH_KCC_KEY_CLASH = 36,
+ ERR_MATCH_CAT_CAM_FAILED = 37,
+ ERR_PARTIAL_FLOW_MARK_TOO_BIG = 38,
+ ERR_FLOW_PRIORITY_VALUE_INVALID = 39,
ERR_ACTION_MULTIPLE_PORT_ID_UNSUPPORTED = 40,
+ ERR_RSS_TOO_LONG_KEY = 41,
+ ERR_ACTION_AGE_UNSUPPORTED_GROUP_0 = 42,
ERR_MSG_NO_MSG
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b8da5eafba..13fad2760a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -54,6 +54,30 @@ enum res_type_e {
#define MAX_CPY_WRITERS_SUPPORTED 8
+#define MAX_MATCH_FIELDS 16
+
+struct match_elem_s {
+ int masked_for_tcam; /* if potentially selected for TCAM */
+ uint32_t e_word[4];
+ uint32_t e_mask[4];
+
+ int extr_start_offs_id;
+ int8_t rel_offs;
+ uint32_t word_len;
+};
+
+struct km_flow_def_s {
+ struct flow_api_backend_s *be;
+
+ /* For collect flow elements and sorting */
+ struct match_elem_s match[MAX_MATCH_FIELDS];
+ int num_ftype_elem;
+
+ /* Flow information */
+ /* HW input port ID needed for compare. The input port must be identical across flow types */
+ uint32_t port_id;
+};
+
enum flow_port_type_e {
PORT_NONE, /* not defined or drop */
PORT_INTERNAL, /* no queues attached */
@@ -99,6 +123,25 @@ struct nic_flow_def {
uint32_t jump_to_group;
int full_offload;
+
+ /*
+ * Modify field
+ */
+ struct {
+ uint32_t select;
+ union {
+ uint8_t value8[16];
+ uint16_t value16[8];
+ uint32_t value32[4];
+ };
+ } modify_field[MAX_CPY_WRITERS_SUPPORTED];
+
+ uint32_t modify_field_count;
+
+ /*
+ * Key Matcher flow definitions
+ */
+ struct km_flow_def_s km;
};
enum flow_handle_type {
@@ -159,6 +202,9 @@ struct flow_handle {
void km_free_ndev_resource_management(void **handle);
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start, int8_t offset);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 34154c65f8..22430bb3db 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -120,6 +120,17 @@ enum {
} \
} while (0)
+enum frame_offs_e {
+ DYN_L2 = 1,
+ DYN_L3 = 4,
+ DYN_L4 = 7,
+ DYN_L4_PAYLOAD = 8,
+ DYN_TUN_L3 = 13,
+ DYN_TUN_L4 = 16,
+};
+
+/* Sideband info bit indicator */
+
enum km_flm_if_select_e {
KM_FLM_IF_FIRST = 0,
KM_FLM_IF_SECOND = 1
@@ -133,6 +144,39 @@ enum km_flm_if_select_e {
unsigned int alloced_size; \
int debug
+enum {
+ PROT_OTHER = 0,
+ PROT_L2_ETH2 = 1,
+};
+
+enum {
+ PROT_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_L4_ICMP = 4
+};
+
+enum {
+ PROT_TUN_L3_OTHER = 0,
+ PROT_TUN_L3_IPV4 = 1,
+};
+
+enum {
+ PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_ICMP = 4
+};
+
+
+enum {
+ CPY_SELECT_DSCP_IPV4 = 0,
+ CPY_SELECT_DSCP_IPV6 = 1,
+ CPY_SELECT_RQI_QFI = 2,
+ CPY_SELECT_IPV4 = 3,
+ CPY_SELECT_PORT = 4,
+ CPY_SELECT_TEID = 5,
+};
+
struct common_func_s {
COMMON_FUNC_INFO_S;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index e04cd5e857..237e9f7b4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -3,10 +3,38 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <assert.h>
#include <stdlib.h>
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
+#include "nt_util.h"
+
+#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+
+static const struct cam_match_masks_s {
+ uint32_t word_len;
+ uint32_t key_mask[4];
+} cam_masks[] = {
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff } }, /* IP6_SRC, IP6_DST */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffff0000 } }, /* DMAC,SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffff0000, 0x00000000, 0xffff0000 } }, /* DMAC,ethtype */
+ { 4, { 0x00000000, 0x0000ffff, 0xffffffff, 0xffff0000 } }, /* SMAC,ethtype */
+ { 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0x00000000 } }, /* ETH_128 */
+ { 2, { 0xffffffff, 0xffffffff, 0x00000000, 0x00000000 } }, /* IP4_COMBINED */
+ /*
+ * ETH_TYPE, IP4_TTL_PROTO, IP4_SRC, IP4_DST, IP6_FLOW_TC,
+ * IP6_NEXT_HDR_HOP, TP_PORT_COMBINED, SIDEBAND_VNI
+ */
+ { 1, { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IP4_IHL_TOS, TP_PORT_SRC32_OR_ICMP, TCP_CTRL */
+ { 1, { 0xffff0000, 0x00000000, 0x00000000, 0x00000000 } },
+ { 1, { 0x0000ffff, 0x00000000, 0x00000000, 0x00000000 } }, /* TP_PORT_DST32 */
+ /* IPv4 TOS mask bits used often by OVS */
+ { 1, { 0x00030000, 0x00000000, 0x00000000, 0x00000000 } },
+ /* IPv6 TOS mask bits used often by OVS */
+ { 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
+};
void km_free_ndev_resource_management(void **handle)
{
@@ -17,3 +45,56 @@ void km_free_ndev_resource_management(void **handle)
*handle = NULL;
}
+
+int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
+ uint32_t word_len, enum frame_offs_e start_id, int8_t offset)
+{
+ /* valid word_len 1,2,4 */
+ if (word_len == 3) {
+ word_len = 4;
+ e_word[3] = 0;
+ e_mask[3] = 0;
+ }
+
+ if (word_len < 1 || word_len > 4) {
+ assert(0);
+ return -1;
+ }
+
+ for (unsigned int i = 0; i < word_len; i++) {
+ km->match[km->num_ftype_elem].e_word[i] = e_word[i];
+ km->match[km->num_ftype_elem].e_mask[i] = e_mask[i];
+ }
+
+ km->match[km->num_ftype_elem].word_len = word_len;
+ km->match[km->num_ftype_elem].rel_offs = offset;
+ km->match[km->num_ftype_elem].extr_start_offs_id = start_id;
+
+ /*
+ * Determine here if this flow may better be put into TCAM
+ * Otherwise it will go into CAM
+ * This is dependent on a cam_masks list defined above
+ */
+ km->match[km->num_ftype_elem].masked_for_tcam = 1;
+
+ for (unsigned int msk = 0; msk < NUM_CAM_MASKS; msk++) {
+ if (word_len == cam_masks[msk].word_len) {
+ int match = 1;
+
+ for (unsigned int wd = 0; wd < word_len; wd++) {
+ if (e_mask[wd] != cam_masks[msk].key_mask[wd]) {
+ match = 0;
+ break;
+ }
+ }
+
+ if (match) {
+ /* Can go into CAM */
+ km->match[km->num_ftype_elem].masked_for_tcam = 0;
+ }
+ }
+ }
+
+ km->num_ftype_elem++;
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index d61912d49d..1b6a01a7d4 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -416,10 +416,67 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return 0;
}
-static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data __rte_unused,
- uint32_t flm_key_id __rte_unused, uint32_t flm_ft __rte_unused,
- uint16_t rpl_ext_ptr __rte_unused, uint32_t flm_scrub __rte_unused,
- uint32_t priority __rte_unused)
+static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def *fd,
+ const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
+ uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
+{
+ switch (fd->l4_prot) {
+ case PROT_L4_ICMP:
+ fh->flm_prot = fd->ip_prot;
+ break;
+
+ default:
+ switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_ICMP:
+ fh->flm_prot = fd->tunnel_ip_prot;
+ break;
+
+ default:
+ fh->flm_prot = 0;
+ break;
+ }
+
+ break;
+ }
+
+ memcpy(fh->flm_data, packet_data, sizeof(uint32_t) * 10);
+
+ fh->flm_kid = flm_key_id;
+ fh->flm_rpl_ext_ptr = rpl_ext_ptr;
+ fh->flm_prio = (uint8_t)priority;
+ fh->flm_ft = (uint8_t)flm_ft;
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ switch (fd->modify_field[i].select) {
+ case CPY_SELECT_DSCP_IPV4:
+ case CPY_SELECT_RQI_QFI:
+ fh->flm_rqi = (fd->modify_field[i].value8[0] >> 6) & 0x1;
+ fh->flm_qfi = fd->modify_field[i].value8[0] & 0x3f;
+ break;
+
+ case CPY_SELECT_IPV4:
+ fh->flm_nat_ipv4 = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ case CPY_SELECT_PORT:
+ fh->flm_nat_port = ntohs(fd->modify_field[i].value16[0]);
+ break;
+
+ case CPY_SELECT_TEID:
+ fh->flm_teid = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Unknown modify field: %d",
+ fd->modify_field[i].select);
+ break;
+ }
+ }
+}
+
+static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
+ uint32_t flm_key_id, uint32_t flm_ft, uint16_t rpl_ext_ptr,
+ uint32_t flm_scrub, uint32_t priority)
{
struct nic_flow_def *fd;
struct flow_handle fh_copy;
@@ -443,6 +500,9 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
for (int i = 0; i < RES_COUNT; ++i)
fh->flm_db_idxs[i] = fh_copy.db_idxs[i];
+ copy_fd_to_fh_flm(fh, fd, packet_data, flm_key_id, flm_ft, rpl_ext_ptr, flm_scrub,
+ priority);
+
free(fd);
return 0;
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
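km_add_match_elem() above decides CAM vs. TCAM placement by checking whether an element's mask exactly equals one of the predefined cam_masks entries of the same word length. A minimal standalone sketch of that decision (table trimmed to three entries for illustration; not the driver's full list):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

struct cam_match_masks_s {
	uint32_t word_len;
	uint32_t key_mask[4];
};

static const struct cam_match_masks_s cam_masks[] = {
	{ 4, { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff } }, /* IP6_SRC, IP6_DST */
	{ 2, { 0xffffffff, 0xffffffff, 0x00000000, 0x00000000 } }, /* IP4_COMBINED */
	{ 1, { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 } }, /* e.g. IP4_SRC */
};

#define NUM_CAM_MASKS (sizeof(cam_masks) / sizeof(cam_masks[0]))

/* Returns 1 if the element must go to TCAM, 0 if CAM can hold it. */
static int masked_for_tcam(const uint32_t e_mask[4], uint32_t word_len)
{
	/* word_len 3 is normalized to 4 with a zeroed fourth word,
	 * mirroring km_add_match_elem(). */
	uint32_t mask[4] = { e_mask[0], e_mask[1], e_mask[2],
			     word_len == 3 ? 0 : e_mask[3] };

	if (word_len == 3)
		word_len = 4;

	for (size_t m = 0; m < NUM_CAM_MASKS; m++) {
		if (cam_masks[m].word_len != word_len)
			continue;

		int match = 1;

		for (uint32_t wd = 0; wd < word_len; wd++)
			if (mask[wd] != cam_masks[m].key_mask[wd])
				match = 0;

		if (match)
			return 0; /* exact predefined mask: CAM eligible */
	}

	return 1; /* no predefined mask fits: needs TCAM */
}
```

A fully-masked single word matches a predefined entry and stays CAM-eligible; an arbitrary partial mask (such as the OVS TOS bits, unless explicitly listed) falls through to TCAM.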
* [PATCH v5 10/80] net/ntnic: add action queue
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (8 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 09/80] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 11/80] net/ntnic: add action mark Serhii Iliushyk
` (69 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_QUEUE.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 4 ++
doc/guides/rel_notes/release_24_11.rst | 1 +
.../profile_inline/flow_api_profile_inline.c | 37 +++++++++++++++++++
4 files changed, 43 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 1c653fd5a0..5b3c26da05 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,3 +18,4 @@ any = Y
[rte_flow actions]
port_id = Y
+queue = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index a6568cba4e..d43706b2ee 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -42,6 +42,10 @@ Features
- Promiscuous mode (Enable only. The device always runs in promiscuous mode)
- Flow API support.
- Support for multiple rte_flow groups.
+- Multiple TX and RX queues.
+- Scatter and gather support for TX and RX.
+- Jumbo frame support.
+- Traffic mirroring.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index a235ce59d1..2cace179b3 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -162,6 +162,7 @@ New Features
* Added initialization of FPGA modules related to flow HW offload.
* Added basic handling of the virtual queues.
* Added flow handling support
+ * Enable virtual queues
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1b6a01a7d4..f4d4c25176 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -23,6 +23,15 @@
static void *flm_lrn_queue_arr;
+static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
+{
+ for (int i = 0; i < dev->num_queues; ++i)
+ if (dev->rx_queue[i].id == id)
+ return dev->rx_queue[i].hw_id;
+
+ return -1;
+}
+
struct flm_flow_key_def_s {
union {
struct {
@@ -349,6 +358,34 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_queue queue_tmp;
+ const struct rte_flow_action_queue *queue =
+ memcpy_mask_if(&queue_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_queue));
+
+ int hw_id = rx_queue_idx_to_hw_id(dev, queue->index);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RTE_FLOW_ACTION_TYPE_QUEUE port %u, queue index: %u, hw id %u",
+ dev, dev->port, queue->index, hw_id);
+
+ fd->full_offload = 0;
+ *num_queues += 1;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
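The QUEUE action carries an rte_flow queue index, which rx_queue_idx_to_hw_id() resolves to the hardware queue id before it is stored in the destination table. A standalone sketch of that lookup (the queue_s type is simplified here, not the driver's actual structure):

```c
#include <assert.h>

/* Simplified stand-in for the driver's per-device RX queue descriptor. */
struct queue_s {
	int id;    /* rte_flow queue index as seen by the application */
	int hw_id; /* hardware virtual queue id */
};

/* Linear scan, as in rx_queue_idx_to_hw_id(); -1 means unknown index. */
static int rx_queue_idx_to_hw_id(const struct queue_s *rx_queue,
				 int num_queues, int id)
{
	for (int i = 0; i < num_queues; ++i)
		if (rx_queue[i].id == id)
			return rx_queue[i].hw_id;

	return -1;
}
```

Note the real action handler stores the result unchecked; a -1 here would mean the application referenced a queue the port was not configured with.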
* [PATCH v5 11/80] net/ntnic: add action mark
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (9 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 10/80] net/ntnic: add action queue Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 12/80] net/ntnic: add action jump Serhii Iliushyk
` (68 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_MARK.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 5b3c26da05..42ac9f9c31 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,5 +17,6 @@ x86-64 = Y
any = Y
[rte_flow actions]
+mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f4d4c25176..e8b31dbdd2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -386,6 +386,22 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_mark mark_tmp;
+ const struct rte_flow_action_mark *mark =
+ memcpy_mask_if(&mark_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_mark));
+
+ fd->mark = mark->id;
+ NT_LOG(DBG, FILTER, "Mark: %i", mark->id);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 12/80] net/ntnic: add action jump
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (10 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 11/80] net/ntnic: add action mark Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 13/80] net/ntnic: add action drop Serhii Iliushyk
` (67 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_JUMP.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 17 +++++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 42ac9f9c31..f3334fc86d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+jump = Y
mark = Y
port_id = Y
queue = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index e8b31dbdd2..9dfa211095 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -402,6 +402,23 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_jump jump_tmp;
+ const struct rte_flow_action_jump *jump =
+ memcpy_mask_if(&jump_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_jump));
+
+ fd->jump_to_group = jump->group;
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_JUMP: group %u",
+ dev, jump->group);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 13/80] net/ntnic: add action drop
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (11 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 12/80] net/ntnic: add action jump Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 14/80] net/ntnic: add item eth Serhii Iliushyk
` (66 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ACTION_TYPE_DROP.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index f3334fc86d..372653695d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,6 +17,7 @@ x86-64 = Y
any = Y
[rte_flow actions]
+drop = Y
jump = Y
mark = Y
port_id = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9dfa211095..1d949b3b91 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -419,6 +419,18 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_DROP", dev);
+
+ if (action[aidx].conf) {
+ fd->dst_id[fd->dst_num_avail].owning_port_id = 0;
+ fd->dst_id[fd->dst_num_avail].id = 0;
+ fd->dst_id[fd->dst_num_avail].type = PORT_NONE;
+ fd->dst_num_avail++;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 14/80] net/ntnic: add item eth
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (12 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 13/80] net/ntnic: add action drop Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 15/80] net/ntnic: add item IPv4 Serhii Iliushyk
` (65 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_ETH.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 12 ++
.../profile_inline/flow_api_profile_inline.c | 177 ++++++++++++++++++
3 files changed, 190 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 372653695d..36b8212bae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -15,6 +15,7 @@ x86-64 = Y
[rte_flow items]
any = Y
+eth = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 22430bb3db..0c22129fb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -120,6 +120,18 @@ enum {
} \
} while (0)
+static inline int is_non_zero(const void *addr, size_t n)
+{
+ size_t i = 0;
+ const uint8_t *p = (const uint8_t *)addr;
+
+ for (i = 0; i < n; i++)
+ if (p[i] != 0)
+ return 1;
+
+ return 0;
+}
+
enum frame_offs_e {
DYN_L2 = 1,
DYN_L3 = 4,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1d949b3b91..8ac1165738 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -55,6 +55,36 @@ struct flm_flow_key_def_s {
/*
* Flow Matcher functionality
*/
+static inline void set_key_def_qw(struct flm_flow_key_def_s *key_def, unsigned int qw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(qw < 2);
+
+ if (qw == 0) {
+ key_def->qw0_dyn = dyn & 0x7f;
+ key_def->qw0_ofs = ofs & 0xff;
+
+ } else {
+ key_def->qw4_dyn = dyn & 0x7f;
+ key_def->qw4_ofs = ofs & 0xff;
+ }
+}
+
+static inline void set_key_def_sw(struct flm_flow_key_def_s *key_def, unsigned int sw,
+ unsigned int dyn, unsigned int ofs)
+{
+ assert(sw < 2);
+
+ if (sw == 0) {
+ key_def->sw8_dyn = dyn & 0x7f;
+ key_def->sw8_ofs = ofs & 0xff;
+
+ } else {
+ key_def->sw9_dyn = dyn & 0x7f;
+ key_def->sw9_ofs = ofs & 0xff;
+ }
+}
+
static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
{
struct flow_eth_dev *dev = ndev->eth_base;
@@ -457,6 +487,11 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
uint32_t *packet_mask,
struct flm_flow_key_def_s *key_def)
{
+ uint32_t any_count = 0;
+
+ unsigned int qw_counter = 0;
+ unsigned int sw_counter = 0;
+
*in_port_id = UINT32_MAX;
memset(packet_data, 0x0, sizeof(uint32_t) * 10);
@@ -472,6 +507,28 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
+ for (int eidx = 0; elem[eidx].type != RTE_FLOW_ITEM_TYPE_END; ++eidx) {
+ switch (elem[eidx].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH: {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (eth_spec != NULL && eth_mask != NULL) {
+ if (is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6)) {
+ qw_reserved_mac += 1;
+ }
+ }
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
int qw_free = 2 - qw_reserved_mac - qw_reserved_ipv6;
if (qw_free < 0) {
@@ -485,6 +542,126 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_ANY:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ANY",
dev->ndev->adapter_no, dev->port);
+ any_count += 1;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ETH",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_ether_hdr *eth_spec =
+ (const struct rte_ether_hdr *)elem[eidx].spec;
+ const struct rte_ether_hdr *eth_mask =
+ (const struct rte_ether_hdr *)elem[eidx].mask;
+
+ if (any_count > 0) {
+ NT_LOG(ERR, FILTER,
+ "Tunneled L2 ethernet not supported");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (eth_spec == NULL || eth_mask == NULL) {
+ fd->l2_prot = PROT_L2_ETH2;
+ break;
+ }
+
+ int non_zero = is_non_zero(eth_mask->dst_addr.addr_bytes, 6) ||
+ is_non_zero(eth_mask->src_addr.addr_bytes, 6);
+
+ if (non_zero ||
+ (eth_mask->ether_type != 0 && sw_counter >= 2)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ((eth_spec->dst_addr.addr_bytes[0] &
+ eth_mask->dst_addr.addr_bytes[0]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[1] &
+ eth_mask->dst_addr.addr_bytes[1]) << 16) +
+ ((eth_spec->dst_addr.addr_bytes[2] &
+ eth_mask->dst_addr.addr_bytes[2]) << 8) +
+ (eth_spec->dst_addr.addr_bytes[3] &
+ eth_mask->dst_addr.addr_bytes[3]);
+
+ qw_data[1] = ((eth_spec->dst_addr.addr_bytes[4] &
+ eth_mask->dst_addr.addr_bytes[4]) << 24) +
+ ((eth_spec->dst_addr.addr_bytes[5] &
+ eth_mask->dst_addr.addr_bytes[5]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[0] &
+ eth_mask->src_addr.addr_bytes[0]) << 8) +
+ (eth_spec->src_addr.addr_bytes[1] &
+ eth_mask->src_addr.addr_bytes[1]);
+
+ qw_data[2] = ((eth_spec->src_addr.addr_bytes[2] &
+ eth_mask->src_addr.addr_bytes[2]) << 24) +
+ ((eth_spec->src_addr.addr_bytes[3] &
+ eth_mask->src_addr.addr_bytes[3]) << 16) +
+ ((eth_spec->src_addr.addr_bytes[4] &
+ eth_mask->src_addr.addr_bytes[4]) << 8) +
+ (eth_spec->src_addr.addr_bytes[5] &
+ eth_mask->src_addr.addr_bytes[5]);
+
+ qw_data[3] = ntohs(eth_spec->ether_type &
+ eth_mask->ether_type) << 16;
+
+ qw_mask[0] = (eth_mask->dst_addr.addr_bytes[0] << 24) +
+ (eth_mask->dst_addr.addr_bytes[1] << 16) +
+ (eth_mask->dst_addr.addr_bytes[2] << 8) +
+ eth_mask->dst_addr.addr_bytes[3];
+
+ qw_mask[1] = (eth_mask->dst_addr.addr_bytes[4] << 24) +
+ (eth_mask->dst_addr.addr_bytes[5] << 16) +
+ (eth_mask->src_addr.addr_bytes[0] << 8) +
+ eth_mask->src_addr.addr_bytes[1];
+
+ qw_mask[2] = (eth_mask->src_addr.addr_bytes[2] << 24) +
+ (eth_mask->src_addr.addr_bytes[3] << 16) +
+ (eth_mask->src_addr.addr_bytes[4] << 8) +
+ eth_mask->src_addr.addr_bytes[5];
+
+ qw_mask[3] = ntohs(eth_mask->ether_type) << 16;
+
+ km_add_match_elem(&fd->km,
+ &qw_data[(size_t)(qw_counter * 4)],
+ &qw_mask[(size_t)(qw_counter * 4)], 4, DYN_L2, 0);
+ set_key_def_qw(key_def, qw_counter, DYN_L2, 0);
+ qw_counter += 1;
+
+ if (!non_zero)
+ qw_free -= 1;
+
+ } else if (eth_mask->ether_type != 0) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(eth_mask->ether_type) << 16;
+ sw_data[0] = ntohs(eth_spec->ether_type) << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, DYN_L2, 12);
+ set_key_def_sw(key_def, sw_counter, DYN_L2, 12);
+ sw_counter += 1;
+ }
+
+ fd->l2_prot = PROT_L2_ETH2;
+ }
+
break;
default:
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 15/80] net/ntnic: add item IPv4
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (13 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 14/80] net/ntnic: add item eth Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 16/80] net/ntnic: add item ICMP Serhii Iliushyk
` (64 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_IPV4.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v5
* Remove redundant 'break'.
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 163 ++++++++++++++++++
2 files changed, 164 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 36b8212bae..bae25d2e2d 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+ipv4 = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 8ac1165738..eb73e6ca22 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -664,6 +664,169 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv4 *ipv4_spec =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask =
+ (const struct rte_flow_item_ipv4 *)elem[eidx].mask;
+
+ if (ipv4_spec == NULL || ipv4_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.version_ihl != 0 ||
+ ipv4_mask->hdr.type_of_service != 0 ||
+ ipv4_mask->hdr.total_length != 0 ||
+ ipv4_mask->hdr.packet_id != 0 ||
+ (ipv4_mask->hdr.fragment_offset != 0 &&
+ (ipv4_spec->hdr.fragment_offset != 0xffff ||
+ ipv4_mask->hdr.fragment_offset != 0xffff)) ||
+ ipv4_mask->hdr.time_to_live != 0 ||
+ ipv4_mask->hdr.hdr_checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv4 field not supported by running SW version.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (ipv4_spec->hdr.fragment_offset == 0xffff &&
+ ipv4_mask->hdr.fragment_offset == 0xffff) {
+ fd->fragmentation = 0xfe;
+ }
+
+ int match_cnt = (ipv4_mask->hdr.src_addr != 0) +
+ (ipv4_mask->hdr.dst_addr != 0) +
+ (ipv4_mask->hdr.next_proto_id != 0);
+
+ if (match_cnt <= 0) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (qw_free > 0 &&
+ (match_cnt >= 2 ||
+ (match_cnt == 1 && sw_counter >= 2))) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED,
+ error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_mask[0] = 0;
+ qw_data[0] = 0;
+
+ qw_mask[1] = ipv4_mask->hdr.next_proto_id << 16;
+ qw_data[1] = ipv4_spec->hdr.next_proto_id
+ << 16 & qw_mask[1];
+
+ qw_mask[2] = ntohl(ipv4_mask->hdr.src_addr);
+ qw_mask[3] = ntohl(ipv4_mask->hdr.dst_addr);
+
+ qw_data[2] = ntohl(ipv4_spec->hdr.src_addr) & qw_mask[2];
+ qw_data[3] = ntohl(ipv4_spec->hdr.dst_addr) & qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ break;
+ }
+
+ if (ipv4_mask->hdr.src_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.src_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.src_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 12);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 12);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.dst_addr) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(ipv4_mask->hdr.dst_addr);
+ sw_data[0] = ntohl(ipv4_spec->hdr.dst_addr) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 16);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 16);
+ sw_counter += 1;
+ }
+
+ if (ipv4_mask->hdr.next_proto_id) {
+ if (sw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv4_mask->hdr.next_proto_id << 16;
+ sw_data[0] = ipv4_spec->hdr.next_proto_id
+ << 16 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ sw_counter += 1;
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV4;
+
+ else
+ fd->l3_prot = PROT_L3_IPV4;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
(int)elem[eidx].type);
--
2.45.0
* [PATCH v5 16/80] net/ntnic: add item ICMP
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (14 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 15/80] net/ntnic: add item IPv4 Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 17/80] net/ntnic: add item port ID Serhii Iliushyk
` (63 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_ICMP.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../profile_inline/flow_api_profile_inline.c | 101 ++++++++++++++++++
2 files changed, 102 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index bae25d2e2d..d403ea01f3 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,7 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+icmp = Y
ipv4 = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index eb73e6ca22..88665dbf15 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -827,6 +827,107 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp *icmp_spec =
+ (const struct rte_flow_item_icmp *)elem[eidx].spec;
+ const struct rte_flow_item_icmp *icmp_mask =
+ (const struct rte_flow_item_icmp *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->hdr.icmp_cksum != 0 ||
+ icmp_mask->hdr.icmp_ident != 0 ||
+ icmp_mask->hdr.icmp_seq_nb != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->hdr.icmp_type || icmp_mask->hdr.icmp_code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ sw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter,
+ any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->hdr.icmp_type << 24 |
+ icmp_spec->hdr.icmp_code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->hdr.icmp_type << 24 |
+ icmp_mask->hdr.icmp_code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 1;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 1;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
(int)elem[eidx].type);
--
2.45.0
* [PATCH v5 17/80] net/ntnic: add item port ID
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (15 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 16/80] net/ntnic: add item ICMP Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 18/80] net/ntnic: add item void Serhii Iliushyk
` (62 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_PORT_ID.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../flow_api/profile_inline/flow_api_profile_inline.c | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index d403ea01f3..cdf119c4ae 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -18,6 +18,7 @@ any = Y
eth = Y
icmp = Y
ipv4 = Y
+port_id = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 88665dbf15..4fc5afcdaa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -928,6 +928,17 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_PORT_ID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
+ dev->ndev->adapter_no, dev->port);
+
+ if (elem[eidx].spec) {
+ *in_port_id =
+ ((const struct rte_flow_item_port_id *)elem[eidx].spec)->id;
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
(int)elem[eidx].type);
--
2.45.0
* [PATCH v5 18/80] net/ntnic: add item void
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (16 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 17/80] net/ntnic: add item port ID Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 19/80] net/ntnic: add item UDP Serhii Iliushyk
` (61 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Add possibility to use RTE_FLOW_ITEM_TYPE_VOID.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../nthw/flow_api/profile_inline/flow_api_profile_inline.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 4fc5afcdaa..29fe0c4b2f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -939,6 +939,11 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_VOID:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VOID",
+ dev->ndev->adapter_no, dev->port);
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow request: %d",
(int)elem[eidx].type);
--
2.45.0
* [PATCH v5 19/80] net/ntnic: add item UDP
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (17 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 18/80] net/ntnic: add item void Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 20/80] net/ntnic: add action TCP Serhii Iliushyk
` (60 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add possibility to use RTE_FLOW_ITEM_TYPE_UDP.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 103 ++++++++++++++++++
3 files changed, 106 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index cdf119c4ae..61a3d87909 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+udp = Y
[rte_flow actions]
drop = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 0c22129fb4..a95fb69870 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 29fe0c4b2f..1a92de92bc 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -827,6 +827,101 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_udp *udp_spec =
+ (const struct rte_flow_item_udp *)elem[eidx].spec;
+ const struct rte_flow_item_udp *udp_mask =
+ (const struct rte_flow_item_udp *)elem[eidx].mask;
+
+ if (udp_spec == NULL || udp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (udp_mask->hdr.dgram_len != 0 ||
+ udp_mask->hdr.dgram_cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested UDP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (udp_mask->hdr.src_port || udp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(udp_mask->hdr.src_port) << 16) |
+ ntohs(udp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(udp_spec->hdr.src_port)
+ << 16) | ntohs(udp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(udp_mask->hdr.src_port)
+ << 16) | ntohs(udp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_UDP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_UDP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -960,12 +1055,20 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_UDP:
+ fh->flm_prot = 17;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
* [PATCH v5 20/80] net/ntnic: add action TCP
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (18 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 19/80] net/ntnic: add item UDP Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 21/80] net/ntnic: add action VLAN Serhii Iliushyk
` (59 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add possibility to use RTE_FLOW_ITEM_TYPE_TCP.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 108 ++++++++++++++++++
3 files changed, 111 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 61a3d87909..e3c3982895 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+tcp = Y
udp = Y
[rte_flow actions]
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a95fb69870..a1aa74caf5 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -166,6 +166,7 @@ enum {
};
enum {
+ PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
PROT_L4_ICMP = 4
};
@@ -177,6 +178,7 @@ enum {
enum {
PROT_TUN_L4_OTHER = 0,
+ PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 1a92de92bc..4c3844e9b8 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1023,6 +1023,106 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_tcp *tcp_spec =
+ (const struct rte_flow_item_tcp *)elem[eidx].spec;
+ const struct rte_flow_item_tcp *tcp_mask =
+ (const struct rte_flow_item_tcp *)elem[eidx].mask;
+
+ if (tcp_spec == NULL || tcp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (tcp_mask->hdr.sent_seq != 0 ||
+ tcp_mask->hdr.recv_ack != 0 ||
+ tcp_mask->hdr.data_off != 0 ||
+ tcp_mask->hdr.tcp_flags != 0 ||
+ tcp_mask->hdr.rx_win != 0 ||
+ tcp_mask->hdr.cksum != 0 ||
+ tcp_mask->hdr.tcp_urp != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested TCP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (tcp_mask->hdr.src_port || tcp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ sw_data[0] =
+ ((ntohs(tcp_spec->hdr.src_port) << 16) |
+ ntohs(tcp_spec->hdr.dst_port)) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(tcp_spec->hdr.src_port)
+ << 16) | ntohs(tcp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(tcp_mask->hdr.src_port)
+ << 16) | ntohs(tcp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_TCP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_TCP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1055,6 +1155,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
switch (fd->l4_prot) {
+ case PROT_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_L4_UDP:
fh->flm_prot = 17;
break;
@@ -1065,6 +1169,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
default:
switch (fd->tunnel_l4_prot) {
+ case PROT_TUN_L4_TCP:
+ fh->flm_prot = 6;
+ break;
+
case PROT_TUN_L4_UDP:
fh->flm_prot = 17;
break;
--
2.45.0
* [PATCH v5 21/80] net/ntnic: add action VLAN
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (19 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 20/80] net/ntnic: add action TCP Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 22/80] net/ntnic: add item SCTP Serhii Iliushyk
` (58 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add possibility to use RTE_FLOW_ITEM_TYPE_VLAN.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 95 +++++++++++++++++++
4 files changed, 98 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e3c3982895..8b4821d6d0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -21,6 +21,7 @@ ipv4 = Y
port_id = Y
tcp = Y
udp = Y
+vlan = Y
[rte_flow actions]
drop = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index d43706b2ee..f2ce941fe9 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -46,6 +46,7 @@ Features
- Scattered and gather for TX and RX.
- Jumbo frame support.
- Traffic mirroring.
+- VLAN filtering.
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index a1aa74caf5..82ac3d0ff3 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -134,6 +134,7 @@ static inline int is_non_zero(const void *addr, size_t n)
enum frame_offs_e {
DYN_L2 = 1,
+ DYN_FIRST_VLAN = 2,
DYN_L3 = 4,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 4c3844e9b8..627d32047b 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -504,6 +504,20 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
return -1;
}
+ if (implicit_vlan_vid > 0) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = 0x0fff;
+ sw_data[0] = implicit_vlan_vid & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1, DYN_FIRST_VLAN, 0);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN, 0);
+ sw_counter += 1;
+
+ fd->vlans += 1;
+ }
+
int qw_reserved_mac = 0;
int qw_reserved_ipv6 = 0;
@@ -664,6 +678,87 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_VLAN",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_vlan_hdr *vlan_spec =
+ (const struct rte_vlan_hdr *)elem[eidx].spec;
+ const struct rte_vlan_hdr *vlan_mask =
+ (const struct rte_vlan_hdr *)elem[eidx].mask;
+
+ if (vlan_spec == NULL || vlan_mask == NULL) {
+ fd->vlans += 1;
+ break;
+ }
+
+ if (!vlan_mask->vlan_tci && !vlan_mask->eth_proto)
+ break;
+
+ if (implicit_vlan_vid > 0) {
+ NT_LOG(ERR, FILTER,
+ "Multiple VLANs not supported for implicit VLAN patterns.");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM,
+ error);
+ return -1;
+ }
+
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ sw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0], 1,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_sw(key_def, sw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = ntohs(vlan_spec->vlan_tci) << 16 |
+ ntohs(vlan_spec->eth_proto);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohs(vlan_mask->vlan_tci) << 16 |
+ ntohs(vlan_mask->eth_proto);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ DYN_FIRST_VLAN, 2 + 4 * fd->vlans);
+ set_key_def_qw(key_def, qw_counter, DYN_FIRST_VLAN,
+ 2 + 4 * fd->vlans);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ fd->vlans += 1;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_IPV4:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV4",
dev->ndev->adapter_no, dev->port);
--
2.45.0
* [PATCH v5 22/80] net/ntnic: add item SCTP
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (20 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 21/80] net/ntnic: add action VLAN Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 23/80] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
` (57 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add possibility to use RTE_FLOW_ITEM_TYPE_SCTP.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 102 ++++++++++++++++++
3 files changed, 105 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 8b4821d6d0..6691b6dce2 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -19,6 +19,7 @@ eth = Y
icmp = Y
ipv4 = Y
port_id = Y
+sctp = Y
tcp = Y
udp = Y
vlan = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 82ac3d0ff3..f1c57fa9fc 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -169,6 +169,7 @@ enum {
enum {
PROT_L4_TCP = 1,
PROT_L4_UDP = 2,
+ PROT_L4_SCTP = 3,
PROT_L4_ICMP = 4
};
@@ -181,6 +182,7 @@ enum {
PROT_TUN_L4_OTHER = 0,
PROT_TUN_L4_TCP = 1,
PROT_TUN_L4_UDP = 2,
+ PROT_TUN_L4_SCTP = 3,
PROT_TUN_L4_ICMP = 4
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 627d32047b..26e6ee430c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -1017,6 +1017,100 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_SCTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_sctp *sctp_spec =
+ (const struct rte_flow_item_sctp *)elem[eidx].spec;
+ const struct rte_flow_item_sctp *sctp_mask =
+ (const struct rte_flow_item_sctp *)elem[eidx].mask;
+
+ if (sctp_spec == NULL || sctp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (sctp_mask->hdr.tag != 0 || sctp_mask->hdr.cksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested SCTP field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (sctp_mask->hdr.src_port || sctp_mask->hdr.dst_port) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ sw_data[0] = ((ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port)) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = (ntohs(sctp_spec->hdr.src_port)
+ << 16) | ntohs(sctp_spec->hdr.dst_port);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = (ntohs(sctp_mask->hdr.src_port)
+ << 16) | ntohs(sctp_mask->hdr.dst_port);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_SCTP;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_SCTP;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_ICMP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP",
dev->ndev->adapter_no, dev->port);
@@ -1258,6 +1352,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_L4_ICMP:
fh->flm_prot = fd->ip_prot;
break;
@@ -1272,6 +1370,10 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_prot = 17;
break;
+ case PROT_TUN_L4_SCTP:
+ fh->flm_prot = 132;
+ break;
+
case PROT_TUN_L4_ICMP:
fh->flm_prot = fd->tunnel_ip_prot;
break;
--
2.45.0
* [PATCH v5 23/80] net/ntnic: add items IPv6 and ICMPv6
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (21 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 22/80] net/ntnic: add item SCTP Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 24/80] net/ntnic: add action modify field Serhii Iliushyk
` (56 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for:
* RTE_FLOW_ITEM_TYPE_IPV6
* RTE_FLOW_ITEM_TYPE_ICMP6
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 2 +
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 27 ++
.../profile_inline/flow_api_profile_inline.c | 272 ++++++++++++++++++
4 files changed, 303 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 6691b6dce2..320d3c7e0b 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -17,7 +17,9 @@ x86-64 = Y
any = Y
eth = Y
icmp = Y
+icmp6 = Y
ipv4 = Y
+ipv6 = Y
port_id = Y
sctp = Y
tcp = Y
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index f1c57fa9fc..4f381bc0ef 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -164,6 +164,7 @@ enum {
enum {
PROT_L3_IPV4 = 1,
+ PROT_L3_IPV6 = 2
};
enum {
@@ -176,6 +177,7 @@ enum {
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
+ PROT_TUN_L3_IPV6 = 2
};
enum {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index a9016238d0..4bd68c572b 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -45,6 +45,33 @@ static const struct {
} err_msg[] = {
/* 00 */ { "Operation successfully completed" },
/* 01 */ { "Operation failed" },
+ /* 02 */ { "Memory allocation failed" },
+ /* 03 */ { "Too many output destinations" },
+ /* 04 */ { "Too many output queues for RSS" },
+ /* 05 */ { "The VLAN TPID specified is not supported" },
+ /* 06 */ { "The VxLan Push header specified is not accepted" },
+ /* 07 */ { "While interpreting VxLan Pop action, could not find a destination port" },
+ /* 08 */ { "Failed in creating a HW-internal VTEP port" },
+ /* 09 */ { "Too many VLAN tag matches" },
+ /* 10 */ { "IPv6 invalid header specified" },
+ /* 11 */ { "Too many tunnel ports. HW limit reached" },
+ /* 12 */ { "Unknown or unsupported flow match element received" },
+ /* 13 */ { "Match failed because of HW limitations" },
+ /* 14 */ { "Match failed because of HW resource limitations" },
+ /* 15 */ { "Match failed because of too complex element definitions" },
+ /* 16 */ { "Action failed due to too many output destinations" },
+ /* 17 */ { "Action Output failed, due to HW resource exhaustion" },
+ /* 18 */ { "Push Tunnel Header action cannot output to multiple destination queues" },
+ /* 19 */ { "Inline action HW resource exhaustion" },
+ /* 20 */ { "Action retransmit/recirculate HW resource exhaustion" },
+ /* 21 */ { "Flow counter HW resource exhaustion" },
+ /* 22 */ { "Internal HW resource exhaustion to handle Actions" },
+ /* 23 */ { "Internal HW QSL compare failed" },
+ /* 24 */ { "Internal CAT CFN reuse failed" },
+ /* 25 */ { "Match variations too complex" },
+ /* 26 */ { "Match failed because of CAM/TCAM full" },
+ /* 27 */ { "Internal creation of a tunnel end point port failed" },
+ /* 28 */ { "Unknown or unsupported flow action received" },
/* 29 */ { "Removing flow failed" },
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 26e6ee430c..f7a5d42912 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -538,6 +538,22 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
}
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6: {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec != NULL && ipv6_mask != NULL) {
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16))
+ qw_reserved_ipv6 += 1;
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16))
+ qw_reserved_ipv6 += 1;
+ }
+ }
+ break;
+
default:
break;
}
@@ -922,6 +938,163 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_IPV6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_ipv6 *ipv6_spec =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask =
+ (const struct rte_flow_item_ipv6 *)elem[eidx].mask;
+
+ if (ipv6_spec == NULL || ipv6_mask == NULL) {
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ break;
+ }
+
+ if (ipv6_mask->hdr.vtc_flow != 0 ||
+ ipv6_mask->hdr.payload_len != 0 ||
+ ipv6_mask->hdr.hop_limits != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested IPv6 field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.src_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.src_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.src_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 8);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 8);
+ qw_counter += 1;
+ }
+
+ if (is_non_zero(&ipv6_spec->hdr.dst_addr, 16)) {
+ if (qw_counter >= 2) {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ uint32_t *qw_data = &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask = &packet_mask[2 + 4 - qw_counter * 4];
+
+ memcpy(&qw_data[0], &ipv6_spec->hdr.dst_addr, 16);
+ memcpy(&qw_mask[0], &ipv6_mask->hdr.dst_addr, 16);
+
+ qw_data[0] = ntohl(qw_data[0]);
+ qw_data[1] = ntohl(qw_data[1]);
+ qw_data[2] = ntohl(qw_data[2]);
+ qw_data[3] = ntohl(qw_data[3]);
+
+ qw_mask[0] = ntohl(qw_mask[0]);
+ qw_mask[1] = ntohl(qw_mask[1]);
+ qw_mask[2] = ntohl(qw_mask[2]);
+ qw_mask[3] = ntohl(qw_mask[3]);
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0], 4,
+ any_count > 0 ? DYN_TUN_L3 : DYN_L3, 24);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 24);
+ qw_counter += 1;
+ }
+
+ if (ipv6_mask->hdr.proto != 0) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ipv6_mask->hdr.proto << 8;
+ sw_data[0] = ipv6_spec->hdr.proto << 8 & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 4);
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = 0;
+ qw_data[1] = ipv6_spec->hdr.proto << 8;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = 0;
+ qw_mask[1] = ipv6_mask->hdr.proto << 8;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L3 : DYN_L3, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L3 : DYN_L3, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l3_prot != -1)
+ fd->tunnel_l3_prot = PROT_TUN_L3_IPV6;
+
+ else
+ fd->l3_prot = PROT_L3_IPV6;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_UDP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_UDP",
dev->ndev->adapter_no, dev->port);
@@ -1212,6 +1385,105 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_ICMP6",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_flow_item_icmp6 *icmp_spec =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].spec;
+ const struct rte_flow_item_icmp6 *icmp_mask =
+ (const struct rte_flow_item_icmp6 *)elem[eidx].mask;
+
+ if (icmp_spec == NULL || icmp_mask == NULL) {
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ break;
+ }
+
+ if (icmp_mask->checksum != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested ICMP6 field not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (icmp_mask->type || icmp_mask->code) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data = &packet_data[1 - sw_counter];
+ uint32_t *sw_mask = &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ sw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ sw_data[0] &= sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0], &sw_mask[0],
+ 1, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+
+ set_key_def_sw(key_def, sw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 - qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 - qw_counter * 4];
+
+ qw_data[0] = icmp_spec->type << 24 |
+ icmp_spec->code << 16;
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = icmp_mask->type << 24 |
+ icmp_mask->code << 16;
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0], &qw_mask[0],
+ 4, any_count > 0 ? DYN_TUN_L4 : DYN_L4, 0);
+ set_key_def_qw(key_def, qw_counter, any_count > 0
+ ? DYN_TUN_L4 : DYN_L4, 0);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ if (any_count > 0 || fd->l4_prot != -1) {
+ fd->tunnel_l4_prot = PROT_TUN_L4_ICMP;
+ fd->tunnel_ip_prot = 58;
+ key_def->inner_proto = 1;
+
+ } else {
+ fd->l4_prot = PROT_L4_ICMP;
+ fd->ip_prot = 58;
+ key_def->outer_proto = 1;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_TCP:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_TCP",
dev->ndev->adapter_no, dev->port);
--
2.45.0
* [PATCH v5 24/80] net/ntnic: add action modify field
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (22 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 23/80] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 25/80] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
` (55 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for RTE_FLOW_ACTION_TYPE_MODIFY_FIELD.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 7 +
drivers/net/ntnic/include/hw_mod_backend.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 181 ++++++++++++++++++
5 files changed, 191 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 320d3c7e0b..4201c8e8b9 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -30,5 +30,6 @@ vlan = Y
drop = Y
jump = Y
mark = Y
+modify_field = Y
port_id = Y
queue = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index f2ce941fe9..63ad4d95f5 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -47,6 +47,7 @@ Features
- Jumbo frame support.
- Traffic mirroring.
- VLAN filtering.
+- Packet modification: NAT, TTL decrement, DSCP tagging
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 13fad2760a..f6557d0d20 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,10 @@ struct nic_flow_def {
*/
struct {
uint32_t select;
+ uint32_t dyn;
+ uint32_t ofs;
+ uint32_t len;
+ uint32_t level;
union {
uint8_t value8[16];
uint16_t value16[8];
@@ -137,6 +141,9 @@ struct nic_flow_def {
} modify_field[MAX_CPY_WRITERS_SUPPORTED];
uint32_t modify_field_count;
+ uint8_t ttl_sub_enable;
+ uint8_t ttl_sub_ipv4;
+ uint8_t ttl_sub_outer;
/*
* Key Matcher flow definitions
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 4f381bc0ef..6a8a38636f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -140,6 +140,7 @@ enum frame_offs_e {
DYN_L4_PAYLOAD = 8,
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
+ DYN_TUN_L4_PAYLOAD = 17,
};
/* Sideband info bit indicator */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f7a5d42912..4cadd3169b 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -323,6 +323,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
{
unsigned int encap_decap_order = 0;
+ uint64_t modify_field_use_flags = 0x0;
+
*num_dest_port = 0;
*num_queues = 0;
@@ -461,6 +463,185 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
+ {
+ /* Note: This copy method will not work for FLOW_FIELD_POINTER */
+ struct rte_flow_action_modify_field modify_field_tmp;
+ const struct rte_flow_action_modify_field *modify_field =
+ memcpy_mask_if(&modify_field_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_modify_field));
+
+ uint64_t modify_field_use_flag = 0;
+
+ if (modify_field->src.field != RTE_FLOW_FIELD_VALUE) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only src type VALUE is supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.level > 2) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only dst levels 0, 1, and 2 are supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL ||
+ modify_field->dst.field == RTE_FLOW_FIELD_IPV6_HOPLIMIT) {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SUB) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SUB is supported for TTL/HOPLIMIT.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->ttl_sub_enable) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD TTL/HOPLIMIT resource already in use.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->ttl_sub_enable = 1;
+ fd->ttl_sub_ipv4 =
+ (modify_field->dst.field == RTE_FLOW_FIELD_IPV4_TTL)
+ ? 1
+ : 0;
+ fd->ttl_sub_outer = (modify_field->dst.level <= 1) ? 1 : 0;
+
+ } else {
+ if (modify_field->operation != RTE_FLOW_MODIFY_SET) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD only operation SET is supported in general.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (fd->modify_field_count >=
+ dev->ndev->be.tpe.nb_cpy_writers) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD exceeded maximum of %u MODIFY_FIELD actions.",
+ dev->ndev->be.tpe.nb_cpy_writers);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ int mod_outer = modify_field->dst.level <= 1;
+
+ switch (modify_field->dst.field) {
+ case RTE_FLOW_FIELD_IPV4_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 1;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV6_DSCP:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_DSCP_IPV6;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ /*
+ * len=2 is needed because
+ * IPv6 DSCP overlaps 2 bytes.
+ */
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_PSC_QFI:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_RQI_QFI;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 14;
+ fd->modify_field[fd->modify_field_count].len = 1;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 12;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_IPV4_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_IPV4;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L3 : DYN_TUN_L3;
+ fd->modify_field[fd->modify_field_count].ofs = 16;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_SRC:
+ case RTE_FLOW_FIELD_UDP_PORT_SRC:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 0;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_TCP_PORT_DST:
+ case RTE_FLOW_FIELD_UDP_PORT_DST:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_PORT;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4 : DYN_TUN_L4;
+ fd->modify_field[fd->modify_field_count].ofs = 2;
+ fd->modify_field[fd->modify_field_count].len = 2;
+ break;
+
+ case RTE_FLOW_FIELD_GTP_TEID:
+ fd->modify_field[fd->modify_field_count].select =
+ CPY_SELECT_TEID;
+ fd->modify_field[fd->modify_field_count].dyn =
+ mod_outer ? DYN_L4_PAYLOAD
+ : DYN_TUN_L4_PAYLOAD;
+ fd->modify_field[fd->modify_field_count].ofs = 4;
+ fd->modify_field[fd->modify_field_count].len = 4;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type is not supported.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ modify_field_use_flag = 1
+ << fd->modify_field[fd->modify_field_count].select;
+
+ if (modify_field_use_flag & modify_field_use_flags) {
+ NT_LOG(ERR, FILTER,
+ "MODIFY_FIELD dst type hardware resource already used.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ memcpy(fd->modify_field[fd->modify_field_count].value8,
+ modify_field->src.value, 16);
+
+ fd->modify_field[fd->modify_field_count].level =
+ modify_field->dst.level;
+
+ modify_field_use_flags |= modify_field_use_flag;
+ fd->modify_field_count += 1;
+ }
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
--
2.45.0
* [PATCH v5 25/80] net/ntnic: add items gtp and actions raw encap/decap
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (23 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 24/80] net/ntnic: add action modify field Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 26/80] net/ntnic: add cat module Serhii Iliushyk
` (54 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add support for:
* RTE_FLOW_ITEM_TYPE_GTP
* RTE_FLOW_ITEM_TYPE_GTP_PSC
* RTE_FLOW_ACTION_TYPE_RAW_ENCAP
* RTE_FLOW_ACTION_TYPE_RAW_DECAP
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 4 +
doc/guides/nics/ntnic.rst | 4 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/flow_api_engine.h | 40 ++
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/include/stream_binary_flow_api.h | 22 ++
.../profile_inline/flow_api_profile_inline.c | 366 +++++++++++++++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 291 +++++++++++++-
8 files changed, 730 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4201c8e8b9..4cb9509742 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,8 @@ x86-64 = Y
[rte_flow items]
any = Y
eth = Y
+gtp = Y
+gtp_psc = Y
icmp = Y
icmp6 = Y
ipv4 = Y
@@ -33,3 +35,5 @@ mark = Y
modify_field = Y
port_id = Y
queue = Y
+raw_decap = Y
+raw_encap = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 63ad4d95f5..cd7d315456 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -48,6 +48,10 @@ Features
- Traffic mirroring.
- VLAN filtering.
- Packet modification: NAT, TTL decrement, DSCP tagging
+- Tunnel types: GTP.
+- Encapsulation and decapsulation of GTP data.
+- RX VLAN stripping via raw decap.
+- TX VLAN insertion via raw encap.
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 179542d2b2..70e6cad195 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,8 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct flow_action_raw_encap encap;
+ struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
};
@@ -52,6 +54,8 @@ enum nt_rte_flow_item_type {
};
extern rte_spinlock_t flow_lock;
+
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out);
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index f6557d0d20..b1d39b919b 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -56,6 +56,29 @@ enum res_type_e {
#define MAX_MATCH_FIELDS 16
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+ uint32_t user_port_id;
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+ uint16_t ip_csum_precalc;
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
+};
+
struct match_elem_s {
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
@@ -124,6 +147,23 @@ struct nic_flow_def {
int full_offload;
+ /*
+ * Action push tunnel
+ */
+ struct tunnel_header_s tun_hdr;
+
+ /*
+ * If DPDK RTE tunnel helper API used
+ * this holds the tunnel if used in flow
+ */
+ struct tunnel_s *tnl;
+
+ /*
+ * Header Stripper
+ */
+ int header_strip_end_dyn;
+ int header_strip_end_ofs;
+
/*
* Modify field
*/
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6a8a38636f..1b45ea4296 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -175,6 +175,10 @@ enum {
PROT_L4_ICMP = 4
};
+enum {
+ PROT_TUN_GTPV1U = 6,
+};
+
enum {
PROT_TUN_L3_OTHER = 0,
PROT_TUN_L3_IPV4 = 1,
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index d878b848c2..8097518d61 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -18,6 +18,7 @@
#define FLOW_MAX_QUEUES 128
+#define RAW_ENCAP_DECAP_ELEMS_MAX 16
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
@@ -31,6 +32,27 @@ struct flow_queue_id_s {
int hw_id;
};
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ */
+struct flow_action_raw_encap {
+ uint8_t *data;
+ uint8_t *preserve;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ */
+struct flow_action_raw_decap {
+ uint8_t *data;
+ size_t size;
+ struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+ int item_count;
+};
+
struct flow_eth_dev; /* port device */
struct flow_handle;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 4cadd3169b..7b932c7cc5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -463,6 +463,202 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
+
+ if (action[aidx].conf) {
+ const struct flow_action_raw_encap *encap =
+ (const struct flow_action_raw_encap *)action[aidx].conf;
+ const struct flow_action_raw_encap *encap_mask = action_mask
+ ? (const struct flow_action_raw_encap *)action_mask[aidx]
+ .conf
+ : NULL;
+ const struct rte_flow_item *items = encap->items;
+
+ if (encap_decap_order != 1) {
+ NT_LOG(ERR, FILTER,
+ "RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (encap->size == 0 || encap->size > 255 ||
+ encap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "RAW_ENCAP data/size invalid.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 2;
+
+ fd->tun_hdr.len = (uint8_t)encap->size;
+
+ if (encap_mask) {
+ memcpy_mask_if(fd->tun_hdr.d.hdr8, encap->data,
+ encap_mask->data, fd->tun_hdr.len);
+
+ } else {
+ memcpy(fd->tun_hdr.d.hdr8, encap->data, fd->tun_hdr.len);
+ }
+
+ while (items->type != RTE_FLOW_ITEM_TYPE_END) {
+ switch (items->type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ fd->tun_hdr.l2_len = 14;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->tun_hdr.nb_vlans += 1;
+ fd->tun_hdr.l2_len += 4;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ fd->tun_hdr.ip_version = 4;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv4_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 3] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->tun_hdr.ip_version = 6;
+ fd->tun_hdr.l3_len = sizeof(struct rte_ipv6_hdr);
+ fd->tun_hdr.new_outer = 1;
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_sctp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_tcp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_udp_hdr);
+
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 4] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len + 5] = 0xfd;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ fd->tun_hdr.l4_len = sizeof(struct rte_icmp_hdr);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->tun_hdr.l4_len =
+ sizeof(struct rte_flow_item_icmp6);
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ /* Patch length */
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 2] = 0x07;
+ fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+ fd->tun_hdr.l3_len +
+ fd->tun_hdr.l4_len + 3] = 0xfd;
+ break;
+
+ default:
+ break;
+ }
+
+ items++;
+ }
+
+ if (fd->tun_hdr.nb_vlans > 3) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Encapsulation with %d vlans not supported.",
+ (int)fd->tun_hdr.nb_vlans);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ /* Convert encap data to 128-bit little endian */
+ for (size_t i = 0; i < (encap->size + 15) / 16; ++i) {
+ uint8_t *data = fd->tun_hdr.d.hdr8 + i * 16;
+
+ for (unsigned int j = 0; j < 8; ++j) {
+ uint8_t t = data[j];
+ data[j] = data[15 - j];
+ data[15 - j] = t;
+ }
+ }
+ }
+
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_DECAP", dev);
+
+ if (action[aidx].conf) {
+ /* Mask is N/A for RAW_DECAP */
+ const struct flow_action_raw_decap *decap =
+ (const struct flow_action_raw_decap *)action[aidx].conf;
+
+ if (encap_decap_order != 0) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ if (decap->item_count < 2) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - RAW_DECAP must decap something.");
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ encap_decap_order = 1;
+
+ switch (decap->items[decap->item_count - 2].type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ fd->header_strip_end_dyn = DYN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ fd->header_strip_end_dyn = DYN_L4;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_SCTP:
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ case RTE_FLOW_ITEM_TYPE_ICMP:
+ case RTE_FLOW_ITEM_TYPE_ICMP6:
+ fd->header_strip_end_dyn = DYN_L4_PAYLOAD;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ fd->header_strip_end_dyn = DYN_TUN_L3;
+ fd->header_strip_end_ofs = 0;
+ break;
+
+ default:
+ fd->header_strip_end_dyn = DYN_L2;
+ fd->header_strip_end_ofs = 0;
+ break;
+ }
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
{
@@ -1765,6 +1961,174 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_hdr *gtp_spec =
+ (const struct rte_gtp_hdr *)elem[eidx].spec;
+ const struct rte_gtp_hdr *gtp_mask =
+ (const struct rte_gtp_hdr *)elem[eidx].mask;
+
+ if (gtp_spec == NULL || gtp_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_mask->gtp_hdr_info != 0 ||
+ gtp_mask->msg_type != 0 || gtp_mask->plen != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP field is not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_mask->teid) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_mask->teid);
+ sw_data[0] =
+ ntohl(gtp_spec->teid) & sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 4);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_spec->teid);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_mask->teid);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 4);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 4);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
+ case RTE_FLOW_ITEM_TYPE_GTP_PSC:
+ NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP_PSC",
+ dev->ndev->adapter_no, dev->port);
+ {
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_spec =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].spec;
+ const struct rte_gtp_psc_generic_hdr *gtp_psc_mask =
+ (const struct rte_gtp_psc_generic_hdr *)elem[eidx].mask;
+
+ if (gtp_psc_spec == NULL || gtp_psc_mask == NULL) {
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ break;
+ }
+
+ if (gtp_psc_mask->type != 0 ||
+ gtp_psc_mask->ext_hdr_len != 0) {
+ NT_LOG(ERR, FILTER,
+ "Requested GTP PSC field is not supported by running SW version");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+
+ if (gtp_psc_mask->qfi) {
+ if (sw_counter < 2) {
+ uint32_t *sw_data =
+ &packet_data[1 - sw_counter];
+ uint32_t *sw_mask =
+ &packet_mask[1 - sw_counter];
+
+ sw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ sw_data[0] = ntohl(gtp_psc_spec->qfi) &
+ sw_mask[0];
+
+ km_add_match_elem(&fd->km, &sw_data[0],
+ &sw_mask[0], 1,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_sw(key_def, sw_counter,
+ DYN_L4_PAYLOAD, 14);
+ sw_counter += 1;
+
+ } else if (qw_counter < 2 && qw_free > 0) {
+ uint32_t *qw_data =
+ &packet_data[2 + 4 -
+ qw_counter * 4];
+ uint32_t *qw_mask =
+ &packet_mask[2 + 4 -
+ qw_counter * 4];
+
+ qw_data[0] = ntohl(gtp_psc_spec->qfi);
+ qw_data[1] = 0;
+ qw_data[2] = 0;
+ qw_data[3] = 0;
+
+ qw_mask[0] = ntohl(gtp_psc_mask->qfi);
+ qw_mask[1] = 0;
+ qw_mask[2] = 0;
+ qw_mask[3] = 0;
+
+ qw_data[0] &= qw_mask[0];
+ qw_data[1] &= qw_mask[1];
+ qw_data[2] &= qw_mask[2];
+ qw_data[3] &= qw_mask[3];
+
+ km_add_match_elem(&fd->km, &qw_data[0],
+ &qw_mask[0], 4,
+ DYN_L4_PAYLOAD, 14);
+ set_key_def_qw(key_def, qw_counter,
+ DYN_L4_PAYLOAD, 14);
+ qw_counter += 1;
+ qw_free -= 1;
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "Key size too big. Out of SW-QW resources.");
+ flow_nic_set_error(ERR_FAILED, error);
+ return -1;
+ }
+ }
+
+ fd->tunnel_prot = PROT_TUN_GTPV1U;
+ }
+
+ break;
+
case RTE_FLOW_ITEM_TYPE_PORT_ID:
NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
dev->ndev->adapter_no, dev->port);
@@ -1928,7 +2292,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
- uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+ uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index b9d723c9dd..20b5cb2835 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -16,6 +16,224 @@
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out)
+{
+ int hdri = 0;
+ int pkti = 0;
+
+ /* Ethernet */
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_ether_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ETH;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ rte_be16_t ether_type = ((struct rte_ether_hdr *)&data[pkti])->ether_type;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ether_hdr);
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* VLAN */
+ while (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ) ||
+ ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ1)) {
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ if (size - pkti < (int)sizeof(struct rte_vlan_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_VLAN;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ ether_type = ((struct rte_vlan_hdr *)&data[pkti])->eth_proto;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_vlan_hdr);
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 3 */
+ uint8_t next_header = 0;
+
+ if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) && (data[pkti] & 0xF0) == 0x40) {
+ if (size - pkti < (int)sizeof(struct rte_ipv4_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV4;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_header = data[pkti + 9];
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ipv4_hdr);
+
+ } else if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6) &&
+ (data[pkti] & 0xF0) == 0x60) {
+ if (size - pkti < (int)sizeof(struct rte_ipv6_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV6;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_header = data[pkti + 6];
+
+ hdri += 1;
+ pkti += sizeof(struct rte_ipv6_hdr);
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* Layer 4 */
+ int gtpu_encap = 0;
+
+ if (next_header == 1) { /* ICMP */
+ if (size - pkti < (int)sizeof(struct rte_icmp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 58) { /* ICMP6 */
+ if (size - pkti < (int)sizeof(struct rte_flow_item_icmp6))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP6;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_icmp_hdr);
+
+ } else if (next_header == 6) { /* TCP */
+ if (size - pkti < (int)sizeof(struct rte_tcp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_TCP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_tcp_hdr);
+
+ } else if (next_header == 17) { /* UDP */
+ if (size - pkti < (int)sizeof(struct rte_udp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_UDP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ gtpu_encap = ((struct rte_udp_hdr *)&data[pkti])->dst_port ==
+ rte_cpu_to_be_16(RTE_GTPU_UDP_PORT);
+
+ hdri += 1;
+ pkti += sizeof(struct rte_udp_hdr);
+
+ } else if (next_header == 132) {/* SCTP */
+ if (size - pkti < (int)sizeof(struct rte_sctp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_SCTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_sctp_hdr);
+
+ } else {
+ return -1;
+ }
+
+ if (size - pkti == 0)
+ goto interpret_end;
+
+ /* GTPv1-U */
+ if (gtpu_encap) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ int extension_present_bit = ((struct rte_gtp_hdr *)&data[pkti])->e;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr);
+
+ if (extension_present_bit) {
+ if (size - pkti < (int)sizeof(struct rte_gtp_hdr_ext_word))
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ uint8_t next_ext = ((struct rte_gtp_hdr_ext_word *)&data[pkti])->next_ext;
+
+ hdri += 1;
+ pkti += sizeof(struct rte_gtp_hdr_ext_word);
+
+ while (next_ext) {
+ size_t ext_len = data[pkti] * 4;
+
+ if (size - pkti < (int)ext_len)
+ return -1;
+
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+ out[hdri].spec = &data[pkti];
+ out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+ next_ext = data[pkti + ext_len - 1];
+
+ hdri += 1;
+ pkti += ext_len;
+ }
+ }
+ }
+
+ if (size - pkti != 0)
+ return -1;
+
+interpret_end:
+ out[hdri].type = RTE_FLOW_ITEM_TYPE_END;
+ out[hdri].spec = NULL;
+ out[hdri].mask = NULL;
+
+ return hdri + 1;
+}
+
int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
{
if (error) {
@@ -95,13 +313,78 @@ int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item
return (type >= 0) ? 0 : -1;
}
-int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
- const struct rte_flow_action actions[] __rte_unused,
- int max_elem __rte_unused,
- uint32_t queue_offset __rte_unused)
+int create_action_elements_inline(struct cnv_action_s *action,
+ const struct rte_flow_action actions[],
+ int max_elem,
+ uint32_t queue_offset)
{
+ int aidx = 0;
int type = -1;
+ do {
+ type = actions[aidx].type;
+ if (type >= 0) {
+ action->flow_actions[aidx].type = type;
+
+ /*
+ * Non-compatible actions handled here
+ */
+ switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
+ const struct rte_flow_action_raw_decap *decap =
+ (const struct rte_flow_action_raw_decap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(decap->data, NULL, decap->size,
+ action->decap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->decap.data = decap->data;
+ action->decap.size = decap->size;
+ action->decap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->decap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: {
+ const struct rte_flow_action_raw_encap *encap =
+ (const struct rte_flow_action_raw_encap *)actions[aidx]
+ .conf;
+ int item_count = interpret_raw_data(encap->data, encap->preserve,
+ encap->size, action->encap.items);
+
+ if (item_count < 0)
+ return item_count;
+ action->encap.data = encap->data;
+ action->encap.preserve = encap->preserve;
+ action->encap.size = encap->size;
+ action->encap.item_count = item_count;
+ action->flow_actions[aidx].conf = &action->encap;
+ }
+ break;
+
+ case RTE_FLOW_ACTION_TYPE_QUEUE: {
+ const struct rte_flow_action_queue *queue =
+ (const struct rte_flow_action_queue *)actions[aidx].conf;
+ action->queue.index = queue->index + queue_offset;
+ action->flow_actions[aidx].conf = &action->queue;
+ }
+ break;
+
+ default: {
+ action->flow_actions[aidx].conf = actions[aidx].conf;
+ }
+ break;
+ }
+
+ aidx++;
+
+ if (aidx == max_elem)
+ return -1;
+ }
+
+ } while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
return (type >= 0) ? 0 : -1;
}
--
2.45.0
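For reference, the 128-bit little-endian conversion applied to the encap header (the loop near the end of the RAW_ENCAP case) reverses each 16-byte chunk in place; it can be exercised standalone. `to_le128` is a hypothetical name used only for this sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reverse each 16-byte chunk of buf in place, as interpret_flow_actions()
 * does when converting the encap header to 128-bit little endian.
 * The underlying buffer must be sized in whole 16-byte chunks, as the
 * driver's tun_hdr.d.hdr8 storage is. */
static void to_le128(uint8_t *buf, size_t size)
{
	for (size_t i = 0; i < (size + 15) / 16; ++i) {
		uint8_t *data = buf + i * 16;

		for (unsigned int j = 0; j < 8; ++j) {
			uint8_t t = data[j];
			data[j] = data[15 - j];
			data[15 - j] = t;
		}
	}
}
```

Because each swap is its own inverse, applying the conversion twice restores the original byte order, which makes the loop easy to sanity-check.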
* [PATCH v5 26/80] net/ntnic: add cat module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (24 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 25/80] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 27/80] net/ntnic: add SLC LR module Serhii Iliushyk
` (53 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Categorizer module's main purpose is to select the behavior
of other modules in the FPGA pipeline depending on a protocol check.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 24 ++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 267 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 165 +++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 47 +++
.../profile_inline/flow_api_profile_inline.c | 83 ++++++
5 files changed, 586 insertions(+)
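The `hw_mod_cat_*_set`/`hw_mod_cat_*_get` pairs in this patch share one `*_mod()` helper distinguished by a `get` flag, with `GET_SET` doing the direction-dependent access. A minimal standalone sketch of that pattern follows; the `GET_SET` macro here is a simplified stand-in for the driver's macro, and `cte_entry`/`cte_mod`/`cte_set`/`cte_get` are illustrative names, not the driver's actual API:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the driver's GET_SET macro: reads the field
 * into *value when get is nonzero, writes *value into the field otherwise. */
#define GET_SET(field, value, get) \
	do { \
		if (get) \
			*(value) = (field); \
		else \
			(field) = *(value); \
	} while (0)

struct cte_entry { uint32_t enable_bm; };

/* Shared modifier: bounds-check the index, then access one field. */
static int cte_mod(struct cte_entry *tbl, unsigned int nb, int index,
		   uint32_t *value, int get)
{
	if ((unsigned int)index >= nb)
		return -1;	/* INDEX_TOO_LARGE in the driver */
	GET_SET(tbl[index].enable_bm, value, get);
	return 0;
}

/* Thin set/get wrappers, mirroring hw_mod_cat_cte_set() and friends. */
static int cte_set(struct cte_entry *tbl, unsigned int nb, int index, uint32_t value)
{
	return cte_mod(tbl, nb, index, &value, 0);
}

static int cte_get(struct cte_entry *tbl, unsigned int nb, int index, uint32_t *value)
{
	return cte_mod(tbl, nb, index, value, 1);
}
```

Folding both directions into one function keeps the bounds check and the per-version field dispatch in a single place, which is why the patch adds only thin set/get wrappers around each `*_mod()`.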
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 1b45ea4296..87fc16ecb4 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -315,11 +315,35 @@ int hw_mod_cat_reset(struct flow_api_backend_s *be);
int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
+/* KCE/KCS/FTE KM */
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+/* KCE/KCS/FTE FLM */
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value);
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count);
+
int hw_mod_cat_kcc_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_exo_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index d266760123..9164ec1ae0 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -951,6 +951,97 @@ static int hw_mod_cat_fte_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_fte_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_fte_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_fte_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ const uint32_t key_cnt = (_VER_ >= 20) ? 4 : 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8 * be->cat.nb_flow_types * key_cnt)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v18.fte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_FTE_ENABLE_BM:
+ GET_SET(be->cat.v21.fte[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_fte_mod(be, field, if_num, 1, index, value, 1);
+}
+
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -964,6 +1055,45 @@ int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cte_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cte_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTE_ENABLE_BM:
+ GET_SET(be->cat.v18.cte[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -979,6 +1109,51 @@ int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cts_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cts_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ int addr_size = (be->cat.cts_num + 1) / 2;
+
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs * addr_size)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_CTS_CAT_A:
+ GET_SET(be->cat.v18.cts[index].cat_a, value);
+ break;
+
+ case HW_CAT_CTS_CAT_B:
+ GET_SET(be->cat.v18.cts[index].cat_b, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -992,6 +1167,98 @@ int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->cat_cot_flush(be->be_dev, &be->cat, start_idx, count);
}
+static int hw_mod_cat_cot_mod(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 18:
+ case 21:
+ switch (field) {
+ case HW_CAT_COT_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->cat.v18.cot[index], (uint8_t)*value,
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->cat.v18.cot, struct cat_v18_cot_s, index, *value);
+ break;
+
+ case HW_CAT_COT_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->cat.v18.cot, struct cat_v18_cot_s, index, *value,
+ be->max_categories);
+ break;
+
+ case HW_CAT_COT_COPY_FROM:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memcpy(&be->cat.v18.cot[index], &be->cat.v18.cot[*value],
+ sizeof(struct cat_v18_cot_s));
+ break;
+
+ case HW_CAT_COT_COLOR:
+ GET_SET(be->cat.v18.cot[index].color, value);
+ break;
+
+ case HW_CAT_COT_KM:
+ GET_SET(be->cat.v18.cot[index].km, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18/21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_cat_cot_mod(be, field, index, &value, 0);
+}
+
int hw_mod_cat_cct_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4ea9387c80..addd5f288f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -22,6 +22,14 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
+ /* Items */
+ struct hw_db_inline_resource_db_cat {
+ struct hw_db_inline_cat_data data;
+ int ref;
+ } *cat;
+
+ uint32_t nb_cat;
+
/* Hardware */
struct hw_db_inline_resource_db_cfn {
@@ -47,6 +55,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_cat = ndev->be.cat.nb_cat_funcs;
+ db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
+
+ if (db->cat == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -56,6 +72,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->cat);
free(db->cfn);
@@ -70,6 +87,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_CAT:
+ hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_COT:
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
@@ -80,6 +101,69 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+/******************************************************************************/
+/* Filter */
+/******************************************************************************/
+
+/*
+ * Setup a filter to match:
+ * All packets in CFN checks
+ * All packets in KM
+ * All packets in FLM with look-up C FT equal to specified argument
+ *
+ * Setup a QSL recipe to DROP all matching packets
+ *
+ * Note: QSL recipe 0 uses DISCARD in order to allow for exception paths (UNMQ)
+ * Consequently another QSL recipe with hard DROP is needed
+ */
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id)
+{
+ (void)ft;
+ (void)qsl_hw_id;
+
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+ (void)offset;
+
+ /* Select and enable QSL recipe */
+ if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
+ return -1;
+
+ if (hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6))
+ return -1;
+
+ if (hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0x8))
+ return -1;
+
+ if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ /* Make all CFN checks TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, 0x0))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x1))
+ return -1;
+
+ /* Final match: look-up_A == TRUE && look-up_C == TRUE */
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1))
+ return -1;
+
+ if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3))
+ return -1;
+
+ if (hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1))
+ return -1;
+
+ return 0;
+}
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -150,3 +234,84 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
db->cot[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* CAT */
+/******************************************************************************/
+
+static int hw_db_inline_cat_compare(const struct hw_db_inline_cat_data *data1,
+ const struct hw_db_inline_cat_data *data2)
+{
+ return data1->vlan_mask == data2->vlan_mask &&
+ data1->mac_port_mask == data2->mac_port_mask &&
+ data1->ptc_mask_frag == data2->ptc_mask_frag &&
+ data1->ptc_mask_l2 == data2->ptc_mask_l2 &&
+ data1->ptc_mask_l3 == data2->ptc_mask_l3 &&
+ data1->ptc_mask_l4 == data2->ptc_mask_l4 &&
+ data1->ptc_mask_tunnel == data2->ptc_mask_tunnel &&
+ data1->ptc_mask_l3_tunnel == data2->ptc_mask_l3_tunnel &&
+ data1->ptc_mask_l4_tunnel == data2->ptc_mask_l4_tunnel &&
+ data1->err_mask_ttl_tunnel == data2->err_mask_ttl_tunnel &&
+ data1->err_mask_ttl == data2->err_mask_ttl && data1->ip_prot == data2->ip_prot &&
+ data1->ip_prot_tunnel == data2->ip_prot_tunnel;
+}
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_cat_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_CAT;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ int ref = db->cat[i].ref;
+
+ if (ref > 0 && hw_db_inline_cat_compare(data, &db->cat[i].data)) {
+ idx.ids = i;
+ hw_db_inline_cat_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->cat[idx.ids].ref = 1;
+ memcpy(&db->cat[idx.ids].data, data, sizeof(struct hw_db_inline_cat_data));
+
+ return idx;
+}
+
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->cat[idx.ids].ref += 1;
+}
+
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->cat[idx.ids].ref -= 1;
+
+ if (db->cat[idx.ids].ref <= 0) {
+ memset(&db->cat[idx.ids].data, 0x0, sizeof(struct hw_db_inline_cat_data));
+ db->cat[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 0116af015d..38502ac1ec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,12 +36,37 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_cat_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
+ HW_DB_IDX_TYPE_CAT,
};
/* Functionality data types */
+struct hw_db_inline_cat_data {
+ uint32_t vlan_mask : 4;
+ uint32_t mac_port_mask : 8;
+ uint32_t ptc_mask_frag : 4;
+ uint32_t ptc_mask_l2 : 7;
+ uint32_t ptc_mask_l3 : 3;
+ uint32_t ptc_mask_l4 : 5;
+ uint32_t padding0 : 1;
+
+ uint32_t ptc_mask_tunnel : 11;
+ uint32_t ptc_mask_l3_tunnel : 3;
+ uint32_t ptc_mask_l4_tunnel : 5;
+ uint32_t err_mask_ttl_tunnel : 2;
+ uint32_t err_mask_ttl : 2;
+ uint32_t padding1 : 9;
+
+ uint8_t ip_prot;
+ uint8_t ip_prot_tunnel;
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -70,6 +95,16 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ };
+ };
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -84,4 +119,16 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+/**/
+
+struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_cat_data *data);
+void hw_db_inline_cat_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cat_idx idx);
+
+/**/
+
+int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
+ uint32_t qsl_hw_id);
+
#endif /* _FLOW_API_HW_DB_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7b932c7cc5..3cfeee2c25 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2346,6 +2350,67 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ struct hw_db_inline_action_set_data action_set_data = { 0 };
+ (void)action_set_data;
+
+ if (fd->jump_to_group != UINT32_MAX) {
+ /* Action Set only contains jump */
+ action_set_data.contains_jump = 1;
+ action_set_data.jump = fd->jump_to_group;
+
+ } else {
+ /* Action Set doesn't contain jump */
+ action_set_data.contains_jump = 0;
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = 0,
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle,
+ &cot_data);
+ fh->db_idxs[fh->db_idx_counter++] = cot_idx.raw;
+ action_set_data.cot = cot_idx;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
+
+ /* Setup CAT */
+ struct hw_db_inline_cat_data cat_data = {
+ .vlan_mask = (0xf << fd->vlans) & 0xf,
+ .mac_port_mask = 1 << fh->port_id,
+ .ptc_mask_frag = fd->fragmentation,
+ .ptc_mask_l2 = fd->l2_prot != -1 ? (1 << fd->l2_prot) : -1,
+ .ptc_mask_l3 = fd->l3_prot != -1 ? (1 << fd->l3_prot) : -1,
+ .ptc_mask_l4 = fd->l4_prot != -1 ? (1 << fd->l4_prot) : -1,
+ .err_mask_ttl = (fd->ttl_sub_enable &&
+ fd->ttl_sub_outer) ? -1 : 0x1,
+ .ptc_mask_tunnel = fd->tunnel_prot !=
+ -1 ? (1 << fd->tunnel_prot) : -1,
+ .ptc_mask_l3_tunnel =
+ fd->tunnel_l3_prot != -1 ? (1 << fd->tunnel_l3_prot) : -1,
+ .ptc_mask_l4_tunnel =
+ fd->tunnel_l4_prot != -1 ? (1 << fd->tunnel_l4_prot) : -1,
+ .err_mask_ttl_tunnel =
+ (fd->ttl_sub_enable && !fd->ttl_sub_outer) ? -1 : 0x1,
+ .ip_prot = fd->ip_prot,
+ .ip_prot_tunnel = fd->tunnel_ip_prot,
+ };
+ struct hw_db_cat_idx cat_idx =
+ hw_db_inline_cat_add(dev->ndev, dev->ndev->hw_db_handle, &cat_data);
+ fh->db_idxs[fh->db_idx_counter++] = cat_idx.raw;
+
+ if (cat_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference CAT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2378,6 +2443,20 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* COT is locked to CFN. Don't set color for CFN 0 */
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+
+ if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ /* Setup filter using matching all packets violating traffic policing parameters */
+ flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+
+ if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE,
+ NT_VIOLATING_MBR_QSL) < 0)
+ goto err_exit0;
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -2412,6 +2491,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PRESET_ALL, 0, 0, 0);
+ hw_mod_cat_cfn_flush(&ndev->be, 0, 1);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
+ hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
hw_mod_tpe_reset(&ndev->be);
--
2.45.0
* [PATCH v5 27/80] net/ntnic: add SLC LR module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (25 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 26/80] net/ntnic: add cat module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 28/80] net/ntnic: add PDB module Serhii Iliushyk
` (52 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new head to the packet.
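The SLC LR recipes added below are stored in a small reference-counted database: an add either reuses an identical live entry (bumping its reference count) or claims the first free slot, skipping the reserved index 0. A minimal standalone sketch of that pattern follows; the struct and function names are illustrative stand-ins, not the driver's actual API.

```c
#include <stdint.h>

/* Illustrative stand-ins for the driver's recipe data and database entry. */
struct slc_lr_data {
	uint32_t head_slice_en : 1;
	uint32_t head_slice_dyn : 5;
	uint32_t head_slice_ofs : 8;
};

struct db_entry {
	struct slc_lr_data data;
	int ref;
};

/* Field-wise compare, mirroring hw_db_inline_slc_lr_compare(): two
 * disabled recipes are equal regardless of their other fields. */
static int slc_lr_equal(const struct slc_lr_data *a, const struct slc_lr_data *b)
{
	if (!a->head_slice_en)
		return a->head_slice_en == b->head_slice_en;

	return a->head_slice_en == b->head_slice_en &&
	       a->head_slice_dyn == b->head_slice_dyn &&
	       a->head_slice_ofs == b->head_slice_ofs;
}

/* Ref-counted add: reuse an identical live entry, else claim a free slot.
 * Index 0 is skipped, mirroring the reserved SLC LR recipe 0.
 * Returns the chosen index, or -1 on resource exhaustion. */
static int db_add(struct db_entry *tbl, uint32_t nb, const struct slc_lr_data *data)
{
	int free_idx = -1;

	for (uint32_t i = 1; i < nb; ++i) {
		if (tbl[i].ref > 0 && slc_lr_equal(data, &tbl[i].data)) {
			tbl[i].ref += 1;	/* deduplicated hit */
			return (int)i;
		}

		if (free_idx < 0 && tbl[i].ref <= 0)
			free_idx = (int)i;
	}

	if (free_idx < 0)
		return -1;	/* table exhausted */

	tbl[free_idx].data = *data;
	tbl[free_idx].ref = 1;
	return free_idx;
}
```

In the driver the claim of a new slot is also the point where the recipe is written to hardware (`hw_mod_slc_lr_rcp_set()` followed by a flush); the dedup hit avoids reprogramming.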
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 2 +
.../nthw/flow_api/hw_mod/hw_mod_slc_lr.c | 100 +++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 104 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 19 ++++
.../profile_inline/flow_api_profile_inline.c | 37 ++++++-
5 files changed, 257 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 87fc16ecb4..2711f44083 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -697,6 +697,8 @@ int hw_mod_slc_lr_alloc(struct flow_api_backend_s *be);
void hw_mod_slc_lr_free(struct flow_api_backend_s *be);
int hw_mod_slc_lr_reset(struct flow_api_backend_s *be);
int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value);
struct pdb_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
index 1d878f3f96..30e5e38690 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_slc_lr.c
@@ -66,3 +66,103 @@ int hw_mod_slc_lr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int co
return be->iface->slc_lr_rcp_flush(be->be_dev, &be->slc_lr, start_idx, count);
}
+
+static int hw_mod_slc_lr_rcp_mod(struct flow_api_backend_s *be, enum hw_slc_lr_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 2:
+ switch (field) {
+ case HW_SLC_LR_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->slc_lr.v2.rcp[index], (uint8_t)*value,
+ sizeof(struct hw_mod_slc_lr_v2_s));
+ break;
+
+ case HW_SLC_LR_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value, be->max_categories);
+ break;
+
+ case HW_SLC_LR_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->max_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->slc_lr.v2.rcp, struct hw_mod_slc_lr_v2_s, index,
+ *value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].head_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_HEAD_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].head_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_SLC_EN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_slc_en, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_DYN:
+ GET_SET(be->slc_lr.v2.rcp[index].tail_dyn, value);
+ break;
+
+ case HW_SLC_LR_RCP_TAIL_OFS:
+ GET_SET_SIGNED(be->slc_lr.v2.rcp[index].tail_ofs, value);
+ break;
+
+ case HW_SLC_LR_RCP_PCAP:
+ GET_SET(be->slc_lr.v2.rcp[index].pcap, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_slc_lr_rcp_set(struct flow_api_backend_s *be, enum hw_slc_lr_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_slc_lr_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index addd5f288f..b17bce3745 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,7 +20,13 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_slc_lr {
+ struct hw_db_inline_slc_lr_data data;
+ int ref;
+ } *slc_lr;
+
uint32_t nb_cot;
+ uint32_t nb_slc_lr;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -55,6 +61,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_slc_lr = ndev->be.max_categories;
+ db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
+
+ if (db->slc_lr == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -72,6 +86,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->slc_lr);
free(db->cat);
free(db->cfn);
@@ -95,6 +110,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_SLC_LR:
+ hw_db_inline_slc_lr_deref(ndev, db_handle,
+ *(struct hw_db_slc_lr_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -235,6 +255,90 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* SLC_LR */
+/******************************************************************************/
+
+static int hw_db_inline_slc_lr_compare(const struct hw_db_inline_slc_lr_data *data1,
+ const struct hw_db_inline_slc_lr_data *data2)
+{
+ if (!data1->head_slice_en)
+ return data1->head_slice_en == data2->head_slice_en;
+
+ return data1->head_slice_en == data2->head_slice_en &&
+ data1->head_slice_dyn == data2->head_slice_dyn &&
+ data1->head_slice_ofs == data2->head_slice_ofs;
+}
+
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_slc_lr_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_SLC_LR;
+
+ for (uint32_t i = 1; i < db->nb_slc_lr; ++i) {
+ int ref = db->slc_lr[i].ref;
+
+ if (ref > 0 && hw_db_inline_slc_lr_compare(data, &db->slc_lr[i].data)) {
+ idx.ids = i;
+ hw_db_inline_slc_lr_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->slc_lr[idx.ids].ref = 1;
+ memcpy(&db->slc_lr[idx.ids].data, data, sizeof(struct hw_db_inline_slc_lr_data));
+
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_SLC_EN, idx.ids, data->head_slice_en);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_DYN, idx.ids, data->head_slice_dyn);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_HEAD_OFS, idx.ids, data->head_slice_ofs);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->slc_lr[idx.ids].ref += 1;
+}
+
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->slc_lr[idx.ids].ref -= 1;
+
+ if (db->slc_lr[idx.ids].ref <= 0) {
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->slc_lr[idx.ids].data, 0x0, sizeof(struct hw_db_inline_slc_lr_data));
+ db->slc_lr[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 38502ac1ec..ef63336b1c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -40,10 +40,15 @@ struct hw_db_cat_idx {
HW_DB_IDX;
};
+struct hw_db_slc_lr_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_SLC_LR,
};
/* Functionality data types */
@@ -89,6 +94,13 @@ struct hw_db_inline_cot_data {
uint32_t padding : 24;
};
+struct hw_db_inline_slc_lr_data {
+ uint32_t head_slice_en : 1;
+ uint32_t head_slice_dyn : 5;
+ uint32_t head_slice_ofs : 8;
+ uint32_t padding : 18;
+};
+
struct hw_db_inline_hsh_data {
uint32_t func;
uint64_t hash_mask;
@@ -119,6 +131,13 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_slc_lr_data *data);
+void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_slc_lr_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 3cfeee2c25..4a5bcc04cf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2276,18 +2276,38 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
-static int setup_flow_flm_actions(struct flow_eth_dev *dev __rte_unused,
- const struct nic_flow_def *fd __rte_unused,
+static int setup_flow_flm_actions(struct flow_eth_dev *dev,
+ const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
- uint32_t local_idxs[] __rte_unused,
- uint32_t *local_idx_counter __rte_unused,
+ uint32_t local_idxs[],
+ uint32_t *local_idx_counter,
uint16_t *flm_rpl_ext_ptr __rte_unused,
uint32_t *flm_ft __rte_unused,
uint32_t *flm_scrub __rte_unused,
- struct rte_flow_error *error __rte_unused)
+ struct rte_flow_error *error)
{
+ /* Setup SLC LR */
+ struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
+
+ if (fd->header_strip_end_dyn != 0 || fd->header_strip_end_ofs != 0) {
+ struct hw_db_inline_slc_lr_data slc_lr_data = {
+ .head_slice_en = 1,
+ .head_slice_dyn = fd->header_strip_end_dyn,
+ .head_slice_ofs = fd->header_strip_end_ofs,
+ };
+ slc_lr_idx =
+ hw_db_inline_slc_lr_add(dev->ndev, dev->ndev->hw_db_handle, &slc_lr_data);
+ local_idxs[(*local_idx_counter)++] = slc_lr_idx.raw;
+
+ if (slc_lr_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference SLC LR resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -2449,6 +2469,9 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* SLC LR index 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2497,6 +2520,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
+ hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
+
hw_mod_tpe_reset(&ndev->be);
flow_nic_free_resource(ndev, RES_TPE_RCP, 0);
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
--
2.45.0
* [PATCH v5 28/80] net/ntnic: add PDB module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (26 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 27/80] net/ntnic: add SLC LR module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 29/80] net/ntnic: add QSL module Serhii Iliushyk
` (51 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Packet Description Builder module creates packet meta-data,
for example virtio-net headers.
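Like the other hw_mod modules, the PDB code funnels every field access through one `_mod()` helper with a `get` flag, and thin `set`/`get` wrappers on top. A minimal standalone sketch of that dispatcher shape (field names and the two-field recipe here are illustrative, not the full v9 recipe):

```c
#include <stdint.h>

/* Minimal stand-ins for the field enum and recipe record. */
enum pdb_field { RCP_DESCRIPTOR, RCP_DESC_LEN };

struct pdb_rcp {
	uint32_t descriptor;
	uint32_t desc_len;
};

/* Read or write one field through the same switch, as the driver's
 * GET_SET macro does inside hw_mod_pdb_rcp_mod(). */
#define GET_SET(slot, vp, get) \
	do { if (get) *(vp) = (slot); else (slot) = *(vp); } while (0)

static int pdb_rcp_mod(struct pdb_rcp *rcp, enum pdb_field field,
		       uint32_t *value, int get)
{
	switch (field) {
	case RCP_DESCRIPTOR:
		GET_SET(rcp->descriptor, value, get);
		break;

	case RCP_DESC_LEN:
		GET_SET(rcp->desc_len, value, get);
		break;

	default:
		return -1;	/* unsupported field */
	}

	return 0;
}

/* Thin set wrapper, mirroring hw_mod_pdb_rcp_set(). */
static int pdb_rcp_set(struct pdb_rcp *rcp, enum pdb_field f, uint32_t v)
{
	return pdb_rcp_mod(rcp, f, &v, 0);
}
```

The init code in this patch uses exactly this entry point to program recipe 0 with `DESCRIPTOR = 7` and `DESC_LEN = 6` before flushing it to hardware.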
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c | 144 ++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 17 +++
3 files changed, 164 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 2711f44083..7f1449d8ee 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -740,6 +740,9 @@ int hw_mod_pdb_alloc(struct flow_api_backend_s *be);
void hw_mod_pdb_free(struct flow_api_backend_s *be);
int hw_mod_pdb_reset(struct flow_api_backend_s *be);
int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value);
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be);
struct tpe_func_s {
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
index c3facacb08..59285405ba 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_pdb.c
@@ -85,6 +85,150 @@ int hw_mod_pdb_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->pdb_rcp_flush(be->be_dev, &be->pdb, start_idx, count);
}
+static int hw_mod_pdb_rcp_mod(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 9:
+ switch (field) {
+ case HW_PDB_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->pdb.v9.rcp[index], (uint8_t)*value,
+ sizeof(struct pdb_v9_rcp_s));
+ break;
+
+ case HW_PDB_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value,
+ be->pdb.nb_pdb_rcp_categories);
+ break;
+
+ case HW_PDB_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->pdb.nb_pdb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->pdb.v9.rcp, struct pdb_v9_rcp_s, index, *value);
+ break;
+
+ case HW_PDB_RCP_DESCRIPTOR:
+ GET_SET(be->pdb.v9.rcp[index].descriptor, value);
+ break;
+
+ case HW_PDB_RCP_DESC_LEN:
+ GET_SET(be->pdb.v9.rcp[index].desc_len, value);
+ break;
+
+ case HW_PDB_RCP_TX_PORT:
+ GET_SET(be->pdb.v9.rcp[index].tx_port, value);
+ break;
+
+ case HW_PDB_RCP_TX_IGNORE:
+ GET_SET(be->pdb.v9.rcp[index].tx_ignore, value);
+ break;
+
+ case HW_PDB_RCP_TX_NOW:
+ GET_SET(be->pdb.v9.rcp[index].tx_now, value);
+ break;
+
+ case HW_PDB_RCP_CRC_OVERWRITE:
+ GET_SET(be->pdb.v9.rcp[index].crc_overwrite, value);
+ break;
+
+ case HW_PDB_RCP_ALIGN:
+ GET_SET(be->pdb.v9.rcp[index].align, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs0_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS0_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs0_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs1_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS1_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs1_rel, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_DYN:
+ GET_SET(be->pdb.v9.rcp[index].ofs2_dyn, value);
+ break;
+
+ case HW_PDB_RCP_OFS2_REL:
+ GET_SET_SIGNED(be->pdb.v9.rcp[index].ofs2_rel, value);
+ break;
+
+ case HW_PDB_RCP_IP_PROT_TNL:
+ GET_SET(be->pdb.v9.rcp[index].ip_prot_tnl, value);
+ break;
+
+ case HW_PDB_RCP_PPC_HSH:
+ GET_SET(be->pdb.v9.rcp[index].ppc_hsh, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_EN:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_en, value);
+ break;
+
+ case HW_PDB_RCP_DUPLICATE_BIT:
+ GET_SET(be->pdb.v9.rcp[index].duplicate_bit, value);
+ break;
+
+ case HW_PDB_RCP_PCAP_KEEP_FCS:
+ GET_SET(be->pdb.v9.rcp[index].pcap_keep_fcs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 9 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_pdb_rcp_set(struct flow_api_backend_s *be, enum hw_pdb_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_pdb_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_pdb_config_flush(struct flow_api_backend_s *be)
{
return be->iface->pdb_config_flush(be->be_dev, &be->pdb);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 4a5bcc04cf..7033674270 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2472,6 +2472,19 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ /* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
+ */
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESCRIPTOR, 0, 7) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_DESC_LEN, 0, 6) < 0)
+ goto err_exit0;
+
+ if (hw_mod_pdb_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
@@ -2529,6 +2542,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_TPE_EXT, 0);
flow_nic_free_resource(ndev, RES_TPE_RPL, 0);
+ hw_mod_pdb_rcp_set(&ndev->be, HW_PDB_RCP_PRESET_ALL, 0, 0);
+ hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
--
2.45.0
* [PATCH v5 29/80] net/ntnic: add QSL module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (27 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 28/80] net/ntnic: add PDB module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 30/80] net/ntnic: add KM module Serhii Iliushyk
` (50 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Queue Selector module directs packets to a given destination,
which includes host queues, physical ports, exception paths, and discard.
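The queue-enable (QEN) bookkeeping in this patch packs the enable bits for four queues into each entry, so `queue_id / 4` selects the entry and `queue_id % 4` the bit, as seen in the `hw_mod_qsl_qen_get()`/`set()` calls in `flow_delete_eth_dev()`. A small standalone sketch of that bit math (the in-memory array stands in for the read-modify-write against the QEN registers):

```c
#include <stdint.h>

/* Each QEN entry carries enable bits for 4 queues. */
static void qen_enable(uint32_t *qen, uint32_t queue_id)
{
	qen[queue_id / 4] |= 1U << (queue_id % 4);
}

static void qen_disable(uint32_t *qen, uint32_t queue_id)
{
	/* Clear only this queue's bit, leaving its neighbours enabled. */
	qen[queue_id / 4] &= ~(1U << (queue_id % 4));
}
```

In the driver the same pattern is a get/modify/set/flush sequence on entry `queue_id / 4`, so disabling one queue never disturbs the other three queues sharing that entry.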
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/hw_mod_backend.h | 8 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 65 ++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c | 218 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 96 +++++++-
7 files changed, 595 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 7f031ccda8..edffd0a57a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -184,8 +184,11 @@ extern const char *dbg_res_descr[];
int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
uint32_t alignment);
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment);
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx);
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
#endif
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 7f1449d8ee..6fa2a3d94f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -666,8 +666,16 @@ int hw_mod_qsl_alloc(struct flow_api_backend_s *be);
void hw_mod_qsl_free(struct flow_api_backend_s *be);
int hw_mod_qsl_reset(struct flow_api_backend_s *be);
int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value);
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value);
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_qsl_unmq_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
uint32_t value);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 4bd68c572b..22d7905c62 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -104,11 +104,52 @@ int flow_nic_alloc_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
return -1;
}
+int flow_nic_alloc_resource_config(struct flow_nic_dev *ndev, enum res_type_e res_type,
+ unsigned int num, uint32_t alignment)
+{
+ unsigned int idx_offs;
+
+ for (unsigned int res_idx = 0; res_idx < ndev->res[res_type].resource_count - (num - 1);
+ res_idx += alignment) {
+ if (!flow_nic_is_resource_used(ndev, res_type, res_idx)) {
+ for (idx_offs = 1; idx_offs < num; idx_offs++)
+ if (flow_nic_is_resource_used(ndev, res_type, res_idx + idx_offs))
+ break;
+
+ if (idx_offs < num)
+ continue;
+
+ /* found a contiguous number of "num" res_type elements - allocate them */
+ for (idx_offs = 0; idx_offs < num; idx_offs++) {
+ flow_nic_mark_resource_used(ndev, res_type, res_idx + idx_offs);
+ ndev->res[res_type].ref[res_idx + idx_offs] = 1;
+ }
+
+ return res_idx;
+ }
+ }
+
+ return -1;
+}
+
void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int idx)
{
flow_nic_mark_resource_unused(ndev, res_type, idx);
}
+int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
+{
+ NT_LOG(DBG, FILTER, "Reference resource %s idx %i (before ref cnt %i)",
+ dbg_res_descr[res_type], index, ndev->res[res_type].ref[index]);
+ assert(flow_nic_is_resource_used(ndev, res_type, index));
+
+ if (ndev->res[res_type].ref[index] == (uint32_t)-1)
+ return -1;
+
+ ndev->res[res_type].ref[index]++;
+ return 0;
+}
+
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index)
{
NT_LOG(DBG, FILTER, "De-reference resource %s idx %i (before ref cnt %i)",
@@ -346,6 +387,18 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
hw_mod_qsl_unmq_set(&ndev->be, HW_QSL_UNMQ_EN, eth_dev->port, 0);
hw_mod_qsl_unmq_flush(&ndev->be, eth_dev->port, 1);
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (int i = 0; i < eth_dev->num_queues; ++i) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value & ~(1U << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
#ifdef FLOW_DEBUG
ndev->be.iface->set_debug_mode(ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
#endif
@@ -546,6 +599,18 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
eth_dev->rss_target_id = -1;
+ if (flow_profile == FLOW_ETH_DEV_PROFILE_INLINE) {
+ for (i = 0; i < eth_dev->num_queues; i++) {
+ uint32_t qen_value = 0;
+ uint32_t queue_id = (uint32_t)eth_dev->rx_queue[i].hw_id;
+
+ hw_mod_qsl_qen_get(&ndev->be, HW_QSL_QEN_EN, queue_id / 4, &qen_value);
+ hw_mod_qsl_qen_set(&ndev->be, HW_QSL_QEN_EN, queue_id / 4,
+ qen_value | (1 << (queue_id % 4)));
+ hw_mod_qsl_qen_flush(&ndev->be, queue_id / 4, 1);
+ }
+ }
+
*rss_target_id = eth_dev->rss_target_id;
nic_insert_eth_port_dev(ndev, eth_dev);
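[Editor's sketch] The new flow_nic_alloc_resource_config() above is a first-fit scan over a used-flag array: it walks candidate start indexes in steps of "alignment" and claims the first run of "num" consecutive free slots. A minimal standalone model (names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* First-fit contiguous allocation with an aligned start index, modelled on
 * flow_nic_alloc_resource_config(). Returns the start index or -1. */
static int alloc_contiguous(bool *used, size_t count, size_t num, size_t alignment)
{
	for (size_t start = 0; start + num <= count; start += alignment) {
		size_t off;

		for (off = 0; off < num; off++)
			if (used[start + off])
				break;

		if (off < num)
			continue;	/* run interrupted; try next aligned start */

		for (off = 0; off < num; off++)
			used[start + off] = true;	/* claim the whole run */

		return (int)start;
	}

	return -1;
}
```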
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
index 93b37d595e..70fe97a298 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_qsl.c
@@ -104,6 +104,114 @@ int hw_mod_qsl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_rcp_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_rcp_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.rcp[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_rcp_s));
+ break;
+
+ case HW_QSL_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value,
+ be->qsl.nb_rcp_categories);
+ break;
+
+ case HW_QSL_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->qsl.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->qsl.v7.rcp, struct qsl_v7_rcp_s, index, *value);
+ break;
+
+ case HW_QSL_RCP_DISCARD:
+ GET_SET(be->qsl.v7.rcp[index].discard, value);
+ break;
+
+ case HW_QSL_RCP_DROP:
+ GET_SET(be->qsl.v7.rcp[index].drop, value);
+ break;
+
+ case HW_QSL_RCP_TBL_LO:
+ GET_SET(be->qsl.v7.rcp[index].tbl_lo, value);
+ break;
+
+ case HW_QSL_RCP_TBL_HI:
+ GET_SET(be->qsl.v7.rcp[index].tbl_hi, value);
+ break;
+
+ case HW_QSL_RCP_TBL_IDX:
+ GET_SET(be->qsl.v7.rcp[index].tbl_idx, value);
+ break;
+
+ case HW_QSL_RCP_TBL_MSK:
+ GET_SET(be->qsl.v7.rcp[index].tbl_msk, value);
+ break;
+
+ case HW_QSL_RCP_LR:
+ GET_SET(be->qsl.v7.rcp[index].lr, value);
+ break;
+
+ case HW_QSL_RCP_TSA:
+ GET_SET(be->qsl.v7.rcp[index].tsa, value);
+ break;
+
+ case HW_QSL_RCP_VLI:
+ GET_SET(be->qsl.v7.rcp[index].vli, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_rcp_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_rcp_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -117,6 +225,73 @@ int hw_mod_qsl_qst_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qst_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qst_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= be->qsl.nb_qst_entries) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->qsl.v7.qst[index], (uint8_t)*value,
+ sizeof(struct qsl_v7_qst_s));
+ break;
+
+ case HW_QSL_QST_QUEUE:
+ GET_SET(be->qsl.v7.qst[index].queue, value);
+ break;
+
+ case HW_QSL_QST_EN:
+ GET_SET(be->qsl.v7.qst[index].en, value);
+ break;
+
+ case HW_QSL_QST_TX_PORT:
+ GET_SET(be->qsl.v7.qst[index].tx_port, value);
+ break;
+
+ case HW_QSL_QST_LRE:
+ GET_SET(be->qsl.v7.qst[index].lre, value);
+ break;
+
+ case HW_QSL_QST_TCI:
+ GET_SET(be->qsl.v7.qst[index].tci, value);
+ break;
+
+ case HW_QSL_QST_VEN:
+ GET_SET(be->qsl.v7.qst[index].ven, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qst_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
@@ -130,6 +305,49 @@ int hw_mod_qsl_qen_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->qsl_qen_flush(be->be_dev, &be->qsl, start_idx, count);
}
+static int hw_mod_qsl_qen_mod(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value, int get)
+{
+ if (index >= QSL_QEN_ENTRIES) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_QSL_QEN_EN:
+ GET_SET(be->qsl.v7.qen[index].en, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_qsl_qen_set(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, &value, 0);
+}
+
+int hw_mod_qsl_qen_get(struct flow_api_backend_s *be, enum hw_qsl_e field, uint32_t index,
+ uint32_t *value)
+{
+ return hw_mod_qsl_qen_mod(be, field, index, value, 1);
+}
+
int hw_mod_qsl_unmq_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
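[Editor's sketch] The QEN accesses in flow_api.c above use `queue_id / 4` and `queue_id % 4`, i.e. each QEN entry packs four queue-enable bits. A small illustrative model of that layout (helper names are not the driver's):

```c
#include <stdint.h>

/* Each QSL QEN entry carries four enable bits, so hardware queue id q
 * maps to entry (q / 4), bit (q % 4). */
static uint32_t qen_entry(uint32_t queue_id)
{
	return queue_id / 4;
}

static uint32_t qen_enable(uint32_t qen_value, uint32_t queue_id)
{
	return qen_value | (1U << (queue_id % 4));
}

static uint32_t qen_disable(uint32_t qen_value, uint32_t queue_id)
{
	return qen_value & ~(1U << (queue_id % 4));
}
```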
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index b17bce3745..5572662647 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -20,12 +20,18 @@ struct hw_db_inline_resource_db {
int ref;
} *cot;
+ struct hw_db_inline_resource_db_qsl {
+ struct hw_db_inline_qsl_data data;
+ int qst_idx;
+ } *qsl;
+
struct hw_db_inline_resource_db_slc_lr {
struct hw_db_inline_slc_lr_data data;
int ref;
} *slc_lr;
uint32_t nb_cot;
+ uint32_t nb_qsl;
uint32_t nb_slc_lr;
/* Items */
@@ -61,6 +67,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_qsl = ndev->be.qsl.nb_rcp_categories;
+ db->qsl = calloc(db->nb_qsl, sizeof(struct hw_db_inline_resource_db_qsl));
+
+ if (db->qsl == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_slc_lr = ndev->be.max_categories;
db->slc_lr = calloc(db->nb_slc_lr, sizeof(struct hw_db_inline_resource_db_slc_lr));
@@ -86,6 +100,7 @@ void hw_db_inline_destroy(void *db_handle)
struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
free(db->cot);
+ free(db->qsl);
free(db->slc_lr);
free(db->cat);
@@ -110,6 +125,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_cot_deref(ndev, db_handle, *(struct hw_db_cot_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_QSL:
+ hw_db_inline_qsl_deref(ndev, db_handle, *(struct hw_db_qsl_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_SLC_LR:
hw_db_inline_slc_lr_deref(ndev, db_handle,
*(struct hw_db_slc_lr_idx *)&idxs[i]);
@@ -145,6 +164,13 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
+ /* QSL for traffic policing */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_hw_id, 0x3) < 0)
+ return -1;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, qsl_hw_id, 1) < 0)
+ return -1;
+
/* Select and enable QSL recipe */
if (hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id))
return -1;
@@ -255,6 +281,175 @@ void hw_db_inline_cot_deref(struct flow_nic_dev *ndev __rte_unused, void *db_han
}
}
+/******************************************************************************/
+/* QSL */
+/******************************************************************************/
+
+/* Calculate the queue mask for QSL TBL_MSK for a given number of queues.
+ * NOTE: If the number of queues is not a power of two, the mask is built
+ * for the nearest smaller power of two.
+ */
+static uint32_t queue_mask(uint32_t nr_queues)
+{
+ nr_queues |= nr_queues >> 1;
+ nr_queues |= nr_queues >> 2;
+ nr_queues |= nr_queues >> 4;
+ nr_queues |= nr_queues >> 8;
+ nr_queues |= nr_queues >> 16;
+ return nr_queues >> 1;
+}
+
+static int hw_db_inline_qsl_compare(const struct hw_db_inline_qsl_data *data1,
+ const struct hw_db_inline_qsl_data *data2)
+{
+ if (data1->discard != data2->discard || data1->drop != data2->drop ||
+ data1->table_size != data2->table_size || data1->retransmit != data2->retransmit) {
+ return 0;
+ }
+
+ for (int i = 0; i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ if (data1->table[i].queue != data2->table[i].queue ||
+ data1->table[i].queue_en != data2->table[i].queue_en ||
+ data1->table[i].tx_port != data2->table[i].tx_port ||
+ data1->table[i].tx_port_en != data2->table[i].tx_port_en) {
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_qsl_idx qsl_idx = { .raw = 0 };
+ uint32_t qst_idx = 0;
+ int res;
+
+ qsl_idx.type = HW_DB_IDX_TYPE_QSL;
+
+ if (data->discard) {
+ qsl_idx.ids = 0;
+ return qsl_idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_qsl; ++i) {
+ if (hw_db_inline_qsl_compare(data, &db->qsl[i].data)) {
+ qsl_idx.ids = i;
+ hw_db_inline_qsl_ref(ndev, db, qsl_idx);
+ return qsl_idx;
+ }
+ }
+
+ res = flow_nic_alloc_resource(ndev, RES_QSL_RCP, 1);
+
+ if (res < 0) {
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qsl_idx.ids = res & 0xff;
+
+ if (data->table_size > 0) {
+ res = flow_nic_alloc_resource_config(ndev, RES_QSL_QST, data->table_size, 1);
+
+ if (res < 0) {
+ flow_nic_deref_resource(ndev, RES_QSL_RCP, qsl_idx.ids);
+ qsl_idx.error = 1;
+ return qsl_idx;
+ }
+
+ qst_idx = (uint32_t)res;
+ }
+
+ memcpy(&db->qsl[qsl_idx.ids].data, data, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[qsl_idx.ids].qst_idx = qst_idx;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, qsl_idx.ids, 0x0);
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, qsl_idx.ids, data->discard);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DROP, qsl_idx.ids, data->drop * 0x3);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_LR, qsl_idx.ids, data->retransmit * 0x3);
+
+ if (data->table_size == 0) {
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, 0x0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, 0x0);
+
+ } else {
+ const uint32_t table_start = qst_idx;
+ const uint32_t table_end = table_start + data->table_size - 1;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_LO, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_HI, qsl_idx.ids, table_end);
+
+ /* Toeplitz hash function uses TBL_IDX and TBL_MSK. */
+ uint32_t msk = queue_mask(table_end - table_start + 1);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_IDX, qsl_idx.ids, table_start);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_TBL_MSK, qsl_idx.ids, msk);
+
+ for (uint32_t i = 0; i < data->table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, table_start + i, 0x0);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_TX_PORT, table_start + i,
+ data->table[i].tx_port);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_LRE, table_start + i,
+ data->table[i].tx_port_en);
+
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_QUEUE, table_start + i,
+ data->table[i].queue);
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_EN, table_start + i,
+ data->table[i].queue_en);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, data->table_size);
+ }
+
+ hw_mod_qsl_rcp_flush(&ndev->be, qsl_idx.ids, 1);
+
+ return qsl_idx;
+}
+
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ (void)db_handle;
+
+ if (!idx.error && idx.ids != 0)
+ flow_nic_ref_resource(ndev, RES_QSL_RCP, idx.ids);
+}
+
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error || idx.ids == 0)
+ return;
+
+ if (flow_nic_deref_resource(ndev, RES_QSL_RCP, idx.ids) == 0) {
+ const int table_size = (int)db->qsl[idx.ids].data.table_size;
+
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, idx.ids, 0x0);
+ hw_mod_qsl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ if (table_size > 0) {
+ const int table_start = db->qsl[idx.ids].qst_idx;
+
+ for (int i = 0; i < (int)table_size; ++i) {
+ hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL,
+ table_start + i, 0x0);
+ flow_nic_free_resource(ndev, RES_QSL_QST, table_start + i);
+ }
+
+ hw_mod_qsl_qst_flush(&ndev->be, table_start, table_size);
+ }
+
+ memset(&db->qsl[idx.ids].data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+ db->qsl[idx.ids].qst_idx = 0;
+ }
+}
+
/******************************************************************************/
/* SLC_LR */
/******************************************************************************/
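[Editor's sketch] The queue_mask() helper added in this file uses the classic bit-smearing trick: propagate the highest set bit downward, then shift right once, yielding (largest power of two &lt;= nr_queues) - 1. A standalone copy for experimentation:

```c
#include <stdint.h>

/* Smear the highest set bit down into all lower positions, then shift
 * right once: the result is one less than the largest power of two
 * not exceeding nr_queues. */
static uint32_t queue_mask(uint32_t nr_queues)
{
	nr_queues |= nr_queues >> 1;
	nr_queues |= nr_queues >> 2;
	nr_queues |= nr_queues >> 4;
	nr_queues |= nr_queues >> 8;
	nr_queues |= nr_queues >> 16;
	return nr_queues >> 1;
}
```

For example, 5 queues smear to 0b111 and mask to 3, so only queues 0-3 are addressed, matching the "nearest smaller power of two" note above.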
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index ef63336b1c..d0435acaef 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -36,6 +36,10 @@ struct hw_db_cot_idx {
HW_DB_IDX;
};
+struct hw_db_qsl_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cat_idx {
HW_DB_IDX;
};
@@ -48,6 +52,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
+ HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
};
@@ -113,6 +118,7 @@ struct hw_db_inline_action_set_data {
int jump;
struct {
struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
};
};
};
@@ -131,6 +137,11 @@ struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_ha
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
void hw_db_inline_cot_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
+struct hw_db_qsl_idx hw_db_inline_qsl_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_qsl_data *data);
+void hw_db_inline_qsl_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+void hw_db_inline_qsl_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_qsl_idx idx);
+
struct hw_db_slc_lr_idx hw_db_inline_slc_lr_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_slc_lr_data *data);
void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
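[Editor's sketch] The hw_db_inline_*_add/_ref/_deref triplets declared above share one pattern: adding data that already exists bumps a reference count and reuses the entry; the hardware recipe is torn down only when the last reference drops. A simplified model (the driver keys on full *_data structs, not an int):

```c
#include <string.h>

struct db_entry {
	int key;	/* stands in for the *_data contents */
	int ref;
};

/* Reuse an identical live entry, else claim a free slot. Returns the
 * index, or -1 when the database is full. */
static int db_add(struct db_entry *db, int nb, int key)
{
	int free_idx = -1;

	for (int i = 0; i < nb; i++) {
		if (db[i].ref > 0 && db[i].key == key) {
			db[i].ref++;	/* deduplicate */
			return i;
		}

		if (db[i].ref == 0 && free_idx < 0)
			free_idx = i;
	}

	if (free_idx >= 0) {
		db[free_idx].key = key;
		db[free_idx].ref = 1;	/* program hardware here */
	}

	return free_idx;
}

static void db_deref(struct db_entry *db, int idx)
{
	if (--db[idx].ref == 0)
		memset(&db[idx], 0, sizeof(db[idx]));	/* clear hw entry here */
}
```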
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7033674270..a5b15bc281 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2276,9 +2276,55 @@ static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_d
return 0;
}
+
+static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_data *qsl_data,
+ uint32_t num_dest_port, uint32_t num_queues)
+{
+ memset(qsl_data, 0x0, sizeof(struct hw_db_inline_qsl_data));
+
+ if (fd->dst_num_avail <= 0) {
+ qsl_data->drop = 1;
+
+ } else {
+ assert(fd->dst_num_avail < HW_DB_INLINE_MAX_QST_PER_QSL);
+
+ uint32_t ports[fd->dst_num_avail];
+ uint32_t queues[fd->dst_num_avail];
+
+ uint32_t port_index = 0;
+ uint32_t queue_index = 0;
+ uint32_t max = num_dest_port > num_queues ? num_dest_port : num_queues;
+
+ memset(ports, 0, sizeof(ports));
+ memset(queues, 0, sizeof(queues));
+
+ qsl_data->table_size = max;
+ qsl_data->retransmit = num_dest_port > 0 ? 1 : 0;
+
+ for (int i = 0; i < fd->dst_num_avail; ++i)
+ if (fd->dst_id[i].type == PORT_PHY)
+ ports[port_index++] = fd->dst_id[i].id;
+
+ else if (fd->dst_id[i].type == PORT_VIRT)
+ queues[queue_index++] = fd->dst_id[i].id;
+
+ for (uint32_t i = 0; i < max; ++i) {
+ if (num_dest_port > 0) {
+ qsl_data->table[i].tx_port = ports[i % num_dest_port];
+ qsl_data->table[i].tx_port_en = 1;
+ }
+
+ if (num_queues > 0) {
+ qsl_data->table[i].queue = queues[i % num_queues];
+ qsl_data->table[i].queue_en = 1;
+ }
+ }
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
- const struct hw_db_inline_qsl_data *qsl_data __rte_unused,
+ const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
uint32_t group __rte_unused,
uint32_t local_idxs[],
@@ -2288,6 +2334,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
+ local_idxs[(*local_idx_counter)++] = qsl_idx.raw;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2328,6 +2385,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
fh->caller_id = caller_id;
struct hw_db_inline_qsl_data qsl_data;
+ setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
@@ -2398,6 +2456,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Finalize QSL */
+ struct hw_db_qsl_idx qsl_idx =
+ hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle,
+ &qsl_data);
+ fh->db_idxs[fh->db_idx_counter++] = qsl_idx.raw;
+ action_set_data.qsl = qsl_idx;
+
+ if (qsl_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference QSL resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2469,6 +2540,24 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (hw_mod_cat_cot_flush(&ndev->be, 0, 1) < 0)
goto err_exit0;
+ /* Initialize QSL with unmatched recipe index 0 - discard */
+ if (hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_DISCARD, 0, 0x1) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_rcp_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, 0);
+
+ /* Initialize QST with default index 0 */
+ if (hw_mod_qsl_qst_set(&ndev->be, HW_QSL_QST_PRESET_ALL, 0, 0x0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_qsl_qst_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
+
/* SLC LR index 0 is reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
@@ -2487,6 +2576,7 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
+ flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
if (hw_db_inline_setup_mbr_filter(ndev, NT_VIOLATING_MBR_CFN,
NT_FLM_VIOLATING_MBR_FLOW_TYPE,
@@ -2533,6 +2623,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_cat_cot_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_CAT_CFN, 0);
+ hw_mod_qsl_rcp_set(&ndev->be, HW_QSL_RCP_PRESET_ALL, 0, 0);
+ hw_mod_qsl_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_QSL_RCP, 0);
+
hw_mod_slc_lr_rcp_set(&ndev->be, HW_SLC_LR_RCP_PRESET_ALL, 0, 0);
hw_mod_slc_lr_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_SLC_LR_RCP, 0);
--
2.45.0
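[Editor's sketch] setup_db_qsl_data() in this patch sizes the QST table to the longer of the port and queue lists and repeats the shorter one round-robin via modulo indexing. An illustrative standalone version of that distribution (names are not the driver's):

```c
#include <stddef.h>
#include <stdint.h>

struct qst_entry {
	uint32_t tx_port;
	uint32_t queue;
};

/* Pair destinations round-robin: the table covers the longer list and
 * the shorter list wraps around via (i % n). */
static void fill_qst(struct qst_entry *tbl, size_t tbl_size,
		     const uint32_t *ports, size_t n_ports,
		     const uint32_t *queues, size_t n_queues)
{
	for (size_t i = 0; i < tbl_size; i++) {
		if (n_ports > 0)
			tbl[i].tx_port = ports[i % n_ports];

		if (n_queues > 0)
			tbl[i].queue = queues[i % n_queues];
	}
}
```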
* [PATCH v5 30/80] net/ntnic: add KM module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (28 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 29/80] net/ntnic: add QSL module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 31/80] net/ntnic: add hash API Serhii Iliushyk
` (49 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Key Matcher (KM) module matches on the values of individual packet
fields. It supports both exact matching, implemented with a CAM, and
wildcard matching, implemented with a TCAM.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
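[Editor's sketch] The CAM/TCAM distinction the commit message draws can be modelled in a few lines (these are conceptual structures, not the driver's): a CAM entry matches on exact key equality, while a TCAM entry carries a per-bit mask so "don't care" bits are ignored.

```c
#include <stdbool.h>
#include <stdint.h>

struct cam_entry {
	uint32_t key;
};

struct tcam_entry {
	uint32_t value;
	uint32_t mask;	/* 1 = bit must match, 0 = don't care */
};

/* Exact match: the whole key must be equal. */
static bool cam_match(const struct cam_entry *e, uint32_t key)
{
	return e->key == key;
}

/* Wildcard match: compare only the bits selected by the mask. */
static bool tcam_match(const struct tcam_entry *e, uint32_t key)
{
	return (key & e->mask) == (e->value & e->mask);
}
```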
---
doc/guides/nics/ntnic.rst | 2 +
drivers/net/ntnic/include/flow_api_engine.h | 110 +-
drivers/net/ntnic/include/hw_mod_backend.h | 64 +-
drivers/net/ntnic/nthw/flow_api/flow_km.c | 1065 +++++++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_km.c | 380 ++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++
.../profile_inline/flow_api_hw_db_inline.h | 38 +
.../profile_inline/flow_api_profile_inline.c | 162 +++
8 files changed, 2026 insertions(+), 29 deletions(-)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index cd7d315456..ed306e05b5 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -52,6 +52,8 @@ Features
- Encapsulation and decapsulation of GTP data.
- RX VLAN stripping via raw decap.
- TX VLAN insertion via raw encap.
+- CAM and TCAM based matching.
+- Exact match of 140 million flows and policies.
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b1d39b919b..a0f02f4e8a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -52,34 +52,32 @@ enum res_type_e {
*/
#define MAX_OUTPUT_DEST (128)
+#define MAX_WORD_NUM 24
+#define MAX_BANKS 6
+
+#define MAX_TCAM_START_OFFSETS 4
+
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
/*
- * Tunnel encapsulation header definition
+ * 128 128 32 32 32
+ * Have | QW0 || QW4 || SW8 || SW9 | SWX in FPGA
+ *
+ * Each word may start at any offset. The enabled words are
+ * concatenated in order to build the extracted match data, and the
+ * match key must be built the same way.
*/
-#define MAX_TUN_HDR_SIZE 128
-struct tunnel_header_s {
- union {
- uint8_t hdr8[MAX_TUN_HDR_SIZE];
- uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
- } d;
- uint32_t user_port_id;
- uint8_t len;
-
- uint8_t nb_vlans;
-
- uint8_t ip_version; /* 4: v4, 6: v6 */
- uint16_t ip_csum_precalc;
-
- uint8_t new_outer;
- uint8_t l2_len;
- uint8_t l3_len;
- uint8_t l4_len;
+enum extractor_e {
+ KM_USE_EXTRACTOR_UNDEF,
+ KM_USE_EXTRACTOR_QWORD,
+ KM_USE_EXTRACTOR_SWORD,
};
struct match_elem_s {
+ enum extractor_e extr;
int masked_for_tcam; /* if potentially selected for TCAM */
uint32_t e_word[4];
uint32_t e_mask[4];
@@ -89,16 +87,76 @@ struct match_elem_s {
uint32_t word_len;
};
+enum cam_tech_use_e {
+ KM_CAM,
+ KM_TCAM,
+ KM_SYNERGY
+};
+
struct km_flow_def_s {
struct flow_api_backend_s *be;
+ /* For keeping track of identical entries */
+ struct km_flow_def_s *reference;
+ struct km_flow_def_s *root;
+
/* For collect flow elements and sorting */
struct match_elem_s match[MAX_MATCH_FIELDS];
+ struct match_elem_s *match_map[MAX_MATCH_FIELDS];
int num_ftype_elem;
+ /* Finally formatted CAM/TCAM entry */
+ enum cam_tech_use_e target;
+ uint32_t entry_word[MAX_WORD_NUM];
+ uint32_t entry_mask[MAX_WORD_NUM];
+ int key_word_size;
+
+ /* TCAM calculated possible bank start offsets */
+ int start_offsets[MAX_TCAM_START_OFFSETS];
+ int num_start_offsets;
+
/* Flow information */
/* HW input port ID needed for compare. In port must be identical on flow types */
uint32_t port_id;
+ uint32_t info; /* used for color (actions) */
+ int info_set;
+ int flow_type; /* 0 is illegal and used as unset */
+ int flushed_to_target; /* if this km entry has been finally programmed into NIC hw */
+
+ /* CAM specific bank management */
+ int cam_paired;
+ int record_indexes[MAX_BANKS];
+ int bank_used;
+ uint32_t *cuckoo_moves; /* for CAM statistics only */
+ struct cam_distrib_s *cam_dist;
+
+ /* TCAM specific bank management */
+ struct tcam_distrib_s *tcam_dist;
+ int tcam_start_bank;
+ int tcam_record;
+};
+
+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+
+struct tunnel_header_s {
+ union {
+ uint8_t hdr8[MAX_TUN_HDR_SIZE];
+ uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+ } d;
+
+ uint8_t len;
+
+ uint8_t nb_vlans;
+
+ uint8_t ip_version; /* 4: v4, 6: v6 */
+
+ uint8_t new_outer;
+ uint8_t l2_len;
+ uint8_t l3_len;
+ uint8_t l4_len;
};
enum flow_port_type_e {
@@ -247,11 +305,25 @@ struct flow_handle {
};
};
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
void km_free_ndev_resource_management(void **handle);
int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_mask[4],
uint32_t word_len, enum frame_offs_e start, int8_t offset);
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id);
+/*
+ * Compares 2 KM key definitions after first collect validate and optimization.
+ * km is compared against an existing km1.
+ * if identical, km1 flow_type is returned
+ */
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1);
+
+int km_rcp_set(struct km_flow_def_s *km, int index);
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color);
+int km_clear_data_match_entry(struct km_flow_def_s *km);
+
void kcc_free_ndev_resource_management(void **handle);
/*
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6fa2a3d94f..26903f2183 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -132,6 +132,22 @@ static inline int is_non_zero(const void *addr, size_t n)
return 0;
}
+/* Sideband info bit indicator */
+#define SWX_INFO (1 << 6)
+
+enum km_flm_if_select_e {
+ KM_FLM_IF_FIRST = 0,
+ KM_FLM_IF_SECOND = 1
+};
+
+#define FIELD_START_INDEX 100
+
+#define COMMON_FUNC_INFO_S \
+ int ver; \
+ void *base; \
+ unsigned int alloced_size; \
+ int debug
+
enum frame_offs_e {
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
@@ -141,22 +157,39 @@ enum frame_offs_e {
DYN_TUN_L3 = 13,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ SB_VNI = SWX_INFO | 1,
+ SB_MAC_PORT = SWX_INFO | 2,
+ SB_KCC_ID = SWX_INFO | 3
};
-/* Sideband info bit indicator */
+enum {
+ QW0_SEL_EXCLUDE = 0,
+ QW0_SEL_FIRST32 = 1,
+ QW0_SEL_FIRST64 = 3,
+ QW0_SEL_ALL128 = 4,
+};
-enum km_flm_if_select_e {
- KM_FLM_IF_FIRST = 0,
- KM_FLM_IF_SECOND = 1
+enum {
+ QW4_SEL_EXCLUDE = 0,
+ QW4_SEL_FIRST32 = 1,
+ QW4_SEL_FIRST64 = 2,
+ QW4_SEL_ALL128 = 3,
};
-#define FIELD_START_INDEX 100
+enum {
+ DW8_SEL_EXCLUDE = 0,
+ DW8_SEL_FIRST32 = 3,
+};
-#define COMMON_FUNC_INFO_S \
- int ver; \
- void *base; \
- unsigned int alloced_size; \
- int debug
+enum {
+ DW10_SEL_EXCLUDE = 0,
+ DW10_SEL_FIRST32 = 2,
+};
+
+enum {
+ SWX_SEL_EXCLUDE = 0,
+ SWX_SEL_ALL32 = 1,
+};
enum {
PROT_OTHER = 0,
@@ -440,13 +473,24 @@ int hw_mod_km_alloc(struct flow_api_backend_s *be);
void hw_mod_km_free(struct flow_api_backend_s *be);
int hw_mod_km_reset(struct flow_api_backend_s *be);
int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value);
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value);
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count);
int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
int byte_val, uint32_t *value_set);
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set);
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value);
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record,
int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 237e9f7b4e..30d6ea728e 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -10,8 +10,34 @@
#include "flow_api_engine.h"
#include "nt_util.h"
+#define MAX_QWORDS 2
+#define MAX_SWORDS 2
+
+#define CUCKOO_MOVE_MAX_DEPTH 8
+
#define NUM_CAM_MASKS (ARRAY_SIZE(cam_masks))
+#define CAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_cam_records + (rec))
+#define CAM_KM_DIST_IDX(bnk) \
+ ({ \
+ int _temp_bnk = (bnk); \
+ CAM_DIST_IDX(_temp_bnk, km->record_indexes[_temp_bnk]); \
+ })
+
+#define TCAM_DIST_IDX(bnk, rec) ((bnk) * km->be->km.nb_tcam_bank_width + (rec))
+
+#define CAM_ENTRIES \
+ (km->be->km.nb_cam_banks * km->be->km.nb_cam_records * sizeof(struct cam_distrib_s))
+#define TCAM_ENTRIES \
+ (km->be->km.nb_tcam_bank_width * km->be->km.nb_tcam_banks * sizeof(struct tcam_distrib_s))
+
+/*
+ * CAM structures and defines
+ */
+struct cam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
static const struct cam_match_masks_s {
uint32_t word_len;
uint32_t key_mask[4];
@@ -36,6 +62,25 @@ static const struct cam_match_masks_s {
{ 1, { 0x00300000, 0x00000000, 0x00000000, 0x00000000 } },
};
+static int cam_addr_reserved_stack[CUCKOO_MOVE_MAX_DEPTH];
+
+/*
+ * TCAM structures and defines
+ */
+struct tcam_distrib_s {
+ struct km_flow_def_s *km_owner;
+};
+
+static int tcam_find_mapping(struct km_flow_def_s *km);
+
+void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
+{
+ km->cam_dist = (struct cam_distrib_s *)*handle;
+ km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
+ km->tcam_dist =
+ (struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+}
+
void km_free_ndev_resource_management(void **handle)
{
if (*handle) {
@@ -98,3 +143,1023 @@ int km_add_match_elem(struct km_flow_def_s *km, uint32_t e_word[4], uint32_t e_m
km->num_ftype_elem++;
return 0;
}
+
+static int get_word(struct km_flow_def_s *km, uint32_t size, int marked[])
+{
+ for (int i = 0; i < km->num_ftype_elem; i++)
+ if (!marked[i] && !(km->match[i].extr_start_offs_id & SWX_INFO) &&
+ km->match[i].word_len == size)
+ return i;
+
+ return -1;
+}
+
+int km_key_create(struct km_flow_def_s *km, uint32_t port_id)
+{
+	/*
+	 * Create combined extractor mappings.
+	 * Consider whether key fields could be rearranged to cover otherwise
+	 * un-mappable matches, and split between CAM and TCAM, using synergy
+	 * mode when available.
+	 */
+ int match_marked[MAX_MATCH_FIELDS];
+ int idx = 0;
+ int next = 0;
+ int m_idx;
+ int size;
+
+ memset(match_marked, 0, sizeof(match_marked));
+
+ /* build QWords */
+ for (int qwords = 0; qwords < MAX_QWORDS; qwords++) {
+ size = 4;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 2;
+ m_idx = get_word(km, size, match_marked);
+
+ if (m_idx < 0) {
+ size = 1;
+ m_idx = get_word(km, 1, match_marked);
+ }
+ }
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_QWORD;
+
+ /* build final entry words and mask array */
+ for (int i = 0; i < size; i++) {
+ km->entry_word[idx + i] = km->match[m_idx].e_word[i];
+ km->entry_mask[idx + i] = km->match[m_idx].e_mask[i];
+ }
+
+ idx += size;
+ next++;
+ }
+
+ m_idx = get_word(km, 4, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more QWords */
+ return -1;
+ }
+
+	/*
+	 * On KM v6+ these are DWORDs; however, we only use them as SWORDs for now.
+	 * No match can exploit them as DWORDs because of the maximum CAM key length
+	 * of 12 words, of which the last 2 words are taken by KCC-ID/SWX and Color.
+	 * With one or no QWORDs, both DWORDs would fit within 10 words, but no such
+	 * use case is built in yet.
+	 */
+ /* build SWords */
+ for (int swords = 0; swords < MAX_SWORDS; swords++) {
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx < 0) {
+ /* no more defined */
+ break;
+ }
+
+ match_marked[m_idx] = 1;
+ /* build match map list and set final extractor to use */
+ km->match_map[next] = &km->match[m_idx];
+ km->match[m_idx].extr = KM_USE_EXTRACTOR_SWORD;
+
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[m_idx].e_word[0];
+ km->entry_mask[idx] = km->match[m_idx].e_mask[0];
+ idx++;
+ next++;
+ }
+
+ /*
+ * Make sure we took them all
+ */
+ m_idx = get_word(km, 1, match_marked);
+
+ if (m_idx >= 0) {
+ /* cannot match more SWords */
+ return -1;
+ }
+
+ /*
+ * Handle SWX words specially
+ */
+ int swx_found = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id & SWX_INFO) {
+ km->match_map[next] = &km->match[i];
+ km->match[i].extr = KM_USE_EXTRACTOR_SWORD;
+ /* build final entry words and mask array */
+ km->entry_word[idx] = km->match[i].e_word[0];
+ km->entry_mask[idx] = km->match[i].e_mask[0];
+ idx++;
+ next++;
+ swx_found = 1;
+ }
+ }
+
+ assert(next == km->num_ftype_elem);
+
+ km->key_word_size = idx;
+ km->port_id = port_id;
+
+ km->target = KM_CAM;
+
+ /*
+ * Finally decide if we want to put this match->action into the TCAM
+ * When SWX word used we need to put it into CAM always, no matter what mask pattern
+ * Later, when synergy mode is applied, we can do a split
+ */
+ if (!swx_found && km->key_word_size <= 6) {
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match_map[i]->masked_for_tcam) {
+ /* At least one */
+ km->target = KM_TCAM;
+ }
+ }
+ }
+
+ NT_LOG(DBG, FILTER, "This flow goes into %s", (km->target == KM_TCAM) ? "TCAM" : "CAM");
+
+ if (km->target == KM_TCAM) {
+ if (km->key_word_size > 10) {
+			/* TCAM does not support key sizes larger than 10 words */
+ return -1;
+ }
+
+ /*
+ * adjust for unsupported key word size in TCAM
+ */
+ if ((km->key_word_size == 5 || km->key_word_size == 7 || km->key_word_size == 9)) {
+ km->entry_mask[km->key_word_size] = 0;
+ km->key_word_size++;
+ }
+
+		/*
+		 * Calculate possible start indexes, given that the length of a key
+		 * cannot change within the same set of banks.
+		 * Unfortunately, restrictions in the TCAM lookup make it hard to
+		 * handle key lengths larger than 6, even though other sizes should
+		 * be possible too.
+		 */
+ switch (km->key_word_size) {
+ case 1:
+			for (int i = 0; i < 4; i++)
+				km->start_offsets[i] = 8 + i;
+
+ km->num_start_offsets = 4;
+ break;
+
+ case 2:
+ km->start_offsets[0] = 6;
+ km->num_start_offsets = 1;
+ break;
+
+ case 3:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 4:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ /* enlarge to 6 */
+ km->entry_mask[km->key_word_size++] = 0;
+ km->entry_mask[km->key_word_size++] = 0;
+ break;
+
+ case 6:
+ km->start_offsets[0] = 0;
+ km->num_start_offsets = 1;
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Final Key word size too large: %i",
+ km->key_word_size);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+int km_key_compare(struct km_flow_def_s *km, struct km_flow_def_s *km1)
+{
+ if (km->target != km1->target || km->num_ftype_elem != km1->num_ftype_elem ||
+ km->key_word_size != km1->key_word_size || km->info_set != km1->info_set)
+ return 0;
+
+	/*
+	 * Before KCC-CAM:
+	 * if the port is added to the match, then different ports in CAT
+	 * can reuse this flow type
+	 */
+ int port_match_included = 0, kcc_swx_used = 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ if (km->match[i].extr_start_offs_id == SB_MAC_PORT) {
+ port_match_included = 1;
+ break;
+ }
+
+ if (km->match_map[i]->extr_start_offs_id == SB_KCC_ID) {
+ kcc_swx_used = 1;
+ break;
+ }
+ }
+
+ /*
+ * If not using KCC and if port match is not included in CAM,
+ * we need to have same port_id to reuse
+ */
+ if (!kcc_swx_used && !port_match_included && km->port_id != km1->port_id)
+ return 0;
+
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ /* using same extractor types in same sequence */
+ if (km->match_map[i]->extr_start_offs_id !=
+ km1->match_map[i]->extr_start_offs_id ||
+ km->match_map[i]->rel_offs != km1->match_map[i]->rel_offs ||
+ km->match_map[i]->extr != km1->match_map[i]->extr ||
+ km->match_map[i]->word_len != km1->match_map[i]->word_len) {
+ return 0;
+ }
+ }
+
+ if (km->target == KM_CAM) {
+ /* in CAM must exactly match on all masks */
+ for (int i = 0; i < km->key_word_size; i++)
+ if (km->entry_mask[i] != km1->entry_mask[i])
+ return 0;
+
+ /* Would be set later if not reusing from km1 */
+ km->cam_paired = km1->cam_paired;
+
+ } else if (km->target == KM_TCAM) {
+		/*
+		 * If TCAM, we must make sure the Recipe Key Mask does not
+		 * mask out enabled bits in the masks.
+		 * Note: it is important that km1 is the original creator
+		 * of the KM Recipe, since it contains its true masks
+		 */
+ for (int i = 0; i < km->key_word_size; i++)
+ if ((km->entry_mask[i] & km1->entry_mask[i]) != km->entry_mask[i])
+ return 0;
+
+ km->tcam_start_bank = km1->tcam_start_bank;
+ km->tcam_record = -1; /* needs to be found later */
+
+ } else {
+ NT_LOG(DBG, FILTER, "ERROR - KM target not defined or supported");
+ return 0;
+ }
+
+ /*
+ * Check for a flow clash. If already programmed return with -1
+ */
+ int double_match = 1;
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ if ((km->entry_word[i] & km->entry_mask[i]) !=
+ (km1->entry_word[i] & km1->entry_mask[i])) {
+ double_match = 0;
+ break;
+ }
+ }
+
+ if (double_match)
+ return -1;
+
+	/*
+	 * Note that TCAM and CAM may reuse the same RCP and flow type;
+	 * when this happens, the CAM entry wins on overlap
+	 */
+
+ /* Use same KM Recipe and same flow type - return flow type */
+ return km1->flow_type;
+}
+
+int km_rcp_set(struct km_flow_def_s *km, int index)
+{
+ int qw = 0;
+ int sw = 0;
+ int swx = 0;
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PRESET_ALL, index, 0, 0);
+
+ /* set extractor words, offs, contrib */
+ for (int i = 0; i < km->num_ftype_elem; i++) {
+ switch (km->match_map[i]->extr) {
+ case KM_USE_EXTRACTOR_SWORD:
+ if (km->match_map[i]->extr_start_offs_id & SWX_INFO) {
+ if (km->target == KM_CAM && swx == 0) {
+ /* SWX */
+ if (km->match_map[i]->extr_start_offs_id == SB_VNI) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - VNI");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_MAC_PORT) {
+ NT_LOG(DBG, FILTER,
+ "Set KM SWX sel A - PTC + MAC");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else if (km->match_map[i]->extr_start_offs_id ==
+ SB_KCC_ID) {
+ NT_LOG(DBG, FILTER, "Set KM SWX sel A - KCC ID");
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_CCH, index,
+ 0, 1);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_SWX_SEL_A,
+ index, 0, SWX_SEL_ALL32);
+
+ } else {
+ return -1;
+ }
+
+ } else {
+ return -1;
+ }
+
+ swx++;
+
+ } else {
+ if (sw == 0) {
+ /* DW8 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW8_SEL_A, index, 0,
+ DW8_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW8 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else if (sw == 1) {
+ /* DW10 */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_DW10_SEL_A, index, 0,
+ DW10_SEL_FIRST32);
+ NT_LOG(DBG, FILTER,
+ "Set KM DW10 sel A: dyn: %i, offs: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs);
+
+ } else {
+ return -1;
+ }
+
+ sw++;
+ }
+
+ break;
+
+ case KM_USE_EXTRACTOR_QWORD:
+ if (qw == 0) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW0_SEL_A, index, 0,
+ QW0_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW0 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else if (qw == 1) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_DYN, index, 0,
+ km->match_map[i]->extr_start_offs_id);
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_OFS, index, 0,
+ km->match_map[i]->rel_offs);
+
+ switch (km->match_map[i]->word_len) {
+ case 1:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST32);
+ break;
+
+ case 2:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_FIRST64);
+ break;
+
+ case 4:
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_QW4_SEL_A, index, 0,
+ QW4_SEL_ALL128);
+ break;
+
+ default:
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER,
+ "Set KM QW4 sel A: dyn: %i, offs: %i, size: %i",
+ km->match_map[i]->extr_start_offs_id,
+ km->match_map[i]->rel_offs, km->match_map[i]->word_len);
+
+ } else {
+ return -1;
+ }
+
+ qw++;
+ break;
+
+ default:
+ return -1;
+ }
+ }
+
+ /* set mask A */
+ for (int i = 0; i < km->key_word_size; i++) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_MASK_A, index,
+ (km->be->km.nb_km_rcp_mask_a_word_size - 1) - i,
+ km->entry_mask[i]);
+ NT_LOG(DBG, FILTER, "Set KM mask A: %08x", km->entry_mask[i]);
+ }
+
+ if (km->target == KM_CAM) {
+ /* set info - Color */
+ if (km->info_set) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_INFO_A, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM info A");
+ }
+
+ /* set key length A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_EL_A, index, 0,
+ km->key_word_size + !!km->info_set - 1); /* select id is -1 */
+ /* set Flow Type for Key A */
+ NT_LOG(DBG, FILTER, "Set KM EL A: %i", km->key_word_size + !!km->info_set - 1);
+
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_FTM_A, index, 0, 1 << km->flow_type);
+
+ NT_LOG(DBG, FILTER, "Set KM FTM A - ft: %i", km->flow_type);
+
+ /* Set Paired - only on the CAM part though... TODO split CAM and TCAM */
+ if ((uint32_t)(km->key_word_size + !!km->info_set) >
+ km->be->km.nb_cam_record_words) {
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_PAIRED, index, 0, 1);
+ NT_LOG(DBG, FILTER, "Set KM CAM Paired");
+ km->cam_paired = 1;
+ }
+
+ } else if (km->target == KM_TCAM) {
+ uint32_t bank_bm = 0;
+
+ if (tcam_find_mapping(km) < 0) {
+ /* failed mapping into TCAM */
+ NT_LOG(DBG, FILTER, "INFO: TCAM mapping flow failed");
+ return -1;
+ }
+
+ assert((uint32_t)(km->tcam_start_bank + km->key_word_size) <=
+ km->be->km.nb_tcam_banks);
+
+ for (int i = 0; i < km->key_word_size; i++) {
+ bank_bm |=
+ (1 << (km->be->km.nb_tcam_banks - 1 - (km->tcam_start_bank + i)));
+ }
+
+ /* Set BANK_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_BANK_A, index, 0, bank_bm);
+ /* Set Kl_A */
+ hw_mod_km_rcp_set(km->be, HW_KM_RCP_KL_A, index, 0, km->key_word_size - 1);
+
+ } else {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int cam_populate(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ km->entry_word[i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = km;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1,
+ km->entry_word[km->be->km.nb_cam_record_words + i]);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, km->flow_type);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = km;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+
+ return res;
+}
+
+static int cam_reset_entry(struct km_flow_def_s *km, int bank)
+{
+ int res = 0;
+ int cnt = km->key_word_size + !!km->info_set;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank, km->record_indexes[bank],
+ 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank, km->record_indexes[bank],
+ 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner = NULL;
+
+ if (cnt) {
+ assert(km->cam_paired);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_record_words && cnt; i++, cnt--) {
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_W0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ res |= hw_mod_km_cam_set(km->be, HW_KM_CAM_FT0 + i, bank,
+ km->record_indexes[bank] + 1, 0);
+ }
+
+ km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner = NULL;
+ }
+
+ res |= hw_mod_km_cam_flush(km->be, bank, km->record_indexes[bank], km->cam_paired ? 2 : 1);
+ return res;
+}
+
+static int move_cuckoo_index(struct km_flow_def_s *km)
+{
+ assert(km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner);
+
+ for (uint32_t bank = 0; bank < km->be->km.nb_cam_banks; bank++) {
+ /* It will not select itself */
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank)].km_owner == NULL) {
+ if (km->cam_paired) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(bank) + 1].km_owner != NULL)
+ continue;
+ }
+
+ /*
+ * Populate in new position
+ */
+ int res = cam_populate(km, bank);
+
+ if (res) {
+ NT_LOG(DBG, FILTER,
+ "Error: failed to write to KM CAM in cuckoo move");
+ return 0;
+ }
+
+			/*
+			 * Reset/free the entry in the old bank.
+			 * HW flushes are not needed here, since the old addresses are
+			 * always taken over by the caller. If this code changes in
+			 * future updates, that may no longer hold!
+			 */
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = NULL;
+
+ if (km->cam_paired)
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER,
+ "KM Cuckoo hash moved from bank %i to bank %i (%04X => %04X)",
+ km->bank_used, bank, CAM_KM_DIST_IDX(km->bank_used),
+ CAM_KM_DIST_IDX(bank));
+ km->bank_used = bank;
+ (*km->cuckoo_moves)++;
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx, int levels,
+ int cam_adr_list_len)
+{
+ struct km_flow_def_s *km = km_parent->cam_dist[bank_idx].km_owner;
+
+ assert(levels <= CUCKOO_MOVE_MAX_DEPTH);
+
+	/*
+	 * Only move if both entries have the same pairing (paired vs. single).
+	 * Could be extended later to handle moves of both paired and single entries.
+	 */
+ if (!km || km_parent->cam_paired != km->cam_paired)
+ return 0;
+
+ if (move_cuckoo_index(km))
+ return 1;
+
+ if (levels <= 1)
+ return 0;
+
+ assert(cam_adr_list_len < CUCKOO_MOVE_MAX_DEPTH);
+
+ cam_addr_reserved_stack[cam_adr_list_len++] = bank_idx;
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ int reserved = 0;
+ int new_idx = CAM_KM_DIST_IDX(i);
+
+ for (int i_reserved = 0; i_reserved < cam_adr_list_len; i_reserved++) {
+ if (cam_addr_reserved_stack[i_reserved] == new_idx) {
+ reserved = 1;
+ break;
+ }
+ }
+
+ if (reserved)
+ continue;
+
+ int res = move_cuckoo_index_level(km, new_idx, levels - 1, cam_adr_list_len);
+
+ if (res) {
+ if (move_cuckoo_index(km))
+ return 1;
+
+ assert(0);
+ }
+ }
+
+ return 0;
+}
+
+static int km_write_data_to_cam(struct km_flow_def_s *km)
+{
+ int res = 0;
+ assert(km->be->km.nb_cam_banks <= MAX_BANKS);
+ assert(km->cam_dist);
+
+ NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
+ km->record_indexes[1], km->record_indexes[2]);
+
+ if (km->info_set)
+ km->entry_word[km->key_word_size] = km->info; /* finally set info */
+
+ int bank = -1;
+
+ /*
+ * first step, see if any of the banks are free
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (km->cam_dist[CAM_KM_DIST_IDX(i_bank)].km_owner == NULL) {
+ if (km->cam_paired == 0 ||
+ km->cam_dist[CAM_KM_DIST_IDX(i_bank) + 1].km_owner == NULL) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0) {
+ /*
+ * Second step - cuckoo move existing flows if possible
+ */
+ for (uint32_t i_bank = 0; i_bank < km->be->km.nb_cam_banks; i_bank++) {
+ if (move_cuckoo_index_level(km, CAM_KM_DIST_IDX(i_bank), 4, 0)) {
+ bank = i_bank;
+ break;
+ }
+ }
+ }
+
+ if (bank < 0)
+ return -1;
+
+ /* populate CAM */
+ NT_LOG(DBG, FILTER, "KM Bank = %i (addr %04X)", bank, CAM_KM_DIST_IDX(bank));
+ res = cam_populate(km, bank);
+
+ if (res == 0) {
+ km->flushed_to_target = 1;
+ km->bank_used = bank;
+ }
+
+ return res;
+}
+
+/*
+ * TCAM
+ */
+static int tcam_find_free_record(struct km_flow_def_s *km, int start_bank)
+{
+ for (uint32_t rec = 0; rec < km->be->km.nb_tcam_bank_width; rec++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank, rec)].km_owner == NULL) {
+ int pass = 1;
+
+ for (int ii = 1; ii < km->key_word_size; ii++) {
+ if (km->tcam_dist[TCAM_DIST_IDX(start_bank + ii, rec)].km_owner !=
+ NULL) {
+ pass = 0;
+ break;
+ }
+ }
+
+ if (pass) {
+ km->tcam_record = rec;
+ return 1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int tcam_find_mapping(struct km_flow_def_s *km)
+{
+ /* Search record and start index for this flow */
+ for (int bs_idx = 0; bs_idx < km->num_start_offsets; bs_idx++) {
+ if (tcam_find_free_record(km, km->start_offsets[bs_idx])) {
+ km->tcam_start_bank = km->start_offsets[bs_idx];
+ NT_LOG(DBG, FILTER, "Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ return 0;
+ }
+ }
+
+ return -1;
+}
+
+static int tcam_write_word(struct km_flow_def_s *km, int bank, int record, uint32_t word,
+ uint32_t mask)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ uint8_t a = (uint8_t)((word >> (24 - (byte * 8))) & 0xff);
+ uint8_t a_m = (uint8_t)((mask >> (24 - (byte * 8))) & 0xff);
+ /* calculate important value bits */
+ a = a & a_m;
+
+ for (int val = 0; val < 256; val++) {
+ err |= hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if ((val & a_m) == a)
+ all_recs[rec_val] |= rec_bit;
+ else
+ all_recs[rec_val] &= ~rec_bit;
+
+ err |= hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ /* flush bank */
+ err |= hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+
+ if (err == 0) {
+ assert(km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner == NULL);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = km;
+ }
+
+ return err;
+}
+
+static int km_write_data_to_tcam(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_record < 0) {
+ tcam_find_free_record(km, km->tcam_start_bank);
+
+ if (km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ NT_LOG(DBG, FILTER, "Reused RCP: Found space in TCAM start bank %i, record %i",
+ km->tcam_start_bank, km->tcam_record);
+ }
+
+ /* Write KM_TCI */
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record,
+ km->info);
+ err |= hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record,
+ km->flow_type);
+ err |= hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++) {
+ err = tcam_write_word(km, km->tcam_start_bank + i, km->tcam_record,
+ km->entry_word[i], km->entry_mask[i]);
+ }
+
+ if (err == 0)
+ km->flushed_to_target = 1;
+
+ return err;
+}
+
+static int tcam_reset_bank(struct km_flow_def_s *km, int bank, int record)
+{
+ int err = 0;
+ uint32_t all_recs[3];
+
+ int rec_val = record / 32;
+ int rec_bit_shft = record % 32;
+ uint32_t rec_bit = (1 << rec_bit_shft);
+
+ assert((km->be->km.nb_tcam_bank_width + 31) / 32 <= 3);
+
+ for (int byte = 0; byte < 4; byte++) {
+ for (int val = 0; val < 256; val++) {
+ err = hw_mod_km_tcam_get(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+
+ all_recs[rec_val] &= ~rec_bit;
+ err = hw_mod_km_tcam_set(km->be, HW_KM_TCAM_T, bank, byte, val, all_recs);
+
+ if (err)
+ break;
+ }
+ }
+
+ if (err)
+ return err;
+
+ /* flush bank */
+ err = hw_mod_km_tcam_flush(km->be, bank, ALL_BANK_ENTRIES);
+ km->tcam_dist[TCAM_DIST_IDX(bank, record)].km_owner = NULL;
+
+ NT_LOG(DBG, FILTER, "Reset TCAM bank %i, rec_val %i rec bit %08x", bank, rec_val,
+ rec_bit);
+
+ return err;
+}
+
+static int tcam_reset_entry(struct km_flow_def_s *km)
+{
+ int err = 0;
+
+ if (km->tcam_start_bank < 0 || km->tcam_record < 0) {
+ NT_LOG(DBG, FILTER, "FAILED to find space in TCAM for flow");
+ return -1;
+ }
+
+ /* Write KM_TCI */
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_COLOR, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_set(km->be, HW_KM_TCI_FT, km->tcam_start_bank, km->tcam_record, 0);
+ hw_mod_km_tci_flush(km->be, km->tcam_start_bank, km->tcam_record, 1);
+
+ for (int i = 0; i < km->key_word_size && !err; i++)
+ err = tcam_reset_bank(km, km->tcam_start_bank + i, km->tcam_record);
+
+ return err;
+}
+
+int km_write_data_match_entry(struct km_flow_def_s *km, uint32_t color)
+{
+ int res = -1;
+
+ km->info = color;
+ NT_LOG(DBG, FILTER, "Write Data entry Color: %08x", color);
+
+ switch (km->target) {
+ case KM_CAM:
+ res = km_write_data_to_cam(km);
+ break;
+
+ case KM_TCAM:
+ res = km_write_data_to_tcam(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ break;
+ }
+
+ return res;
+}
+
+int km_clear_data_match_entry(struct km_flow_def_s *km)
+{
+ int res = 0;
+
+ if (km->root) {
+ struct km_flow_def_s *km1 = km->root;
+
+ while (km1->reference != km)
+ km1 = km1->reference;
+
+ km1->reference = km->reference;
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->reference) {
+ km->reference->root = NULL;
+
+ switch (km->target) {
+ case KM_CAM:
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used)].km_owner = km->reference;
+
+ if (km->key_word_size + !!km->info_set > 1) {
+ assert(km->cam_paired);
+ km->cam_dist[CAM_KM_DIST_IDX(km->bank_used) + 1].km_owner =
+ km->reference;
+ }
+
+ break;
+
+ case KM_TCAM:
+ for (int i = 0; i < km->key_word_size; i++) {
+ km->tcam_dist[TCAM_DIST_IDX(km->tcam_start_bank + i,
+ km->tcam_record)]
+ .km_owner = km->reference;
+ }
+
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+
+ } else if (km->flushed_to_target) {
+ switch (km->target) {
+ case KM_CAM:
+ res = cam_reset_entry(km, km->bank_used);
+ break;
+
+ case KM_TCAM:
+ res = tcam_reset_entry(km);
+ break;
+
+ case KM_SYNERGY:
+ default:
+ res = -1;
+ break;
+ }
+
+ km->flushed_to_target = 0;
+ km->bank_used = 0;
+ }
+
+ return res;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
index 532884ca01..b8a30671c3 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_km.c
@@ -165,6 +165,240 @@ int hw_mod_km_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
return be->iface->km_rcp_flush(be->be_dev, &be->km, start_idx, count);
}
+static int hw_mod_km_rcp_mod(struct flow_api_backend_s *be, enum hw_km_e field, int index,
+ int word_off, uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->km.nb_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.rcp[index], (uint8_t)*value, sizeof(struct km_v7_rcp_s));
+ break;
+
+ case HW_KM_RCP_QW0_DYN:
+ GET_SET(be->km.v7.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW0_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw0_sel_b, value);
+ break;
+
+ case HW_KM_RCP_QW4_DYN:
+ GET_SET(be->km.v7.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_KM_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_A:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_a, value);
+ break;
+
+ case HW_KM_RCP_QW4_SEL_B:
+ GET_SET(be->km.v7.rcp[index].qw4_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW8_DYN:
+ GET_SET(be->km.v7.rcp[index].dw8_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW8_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw8_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW8_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw8_sel_b, value);
+ break;
+
+ case HW_KM_RCP_DW10_DYN:
+ GET_SET(be->km.v7.rcp[index].dw10_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW10_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw10_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_A:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_a, value);
+ break;
+
+ case HW_KM_RCP_DW10_SEL_B:
+ GET_SET(be->km.v7.rcp[index].dw10_sel_b, value);
+ break;
+
+ case HW_KM_RCP_SWX_CCH:
+ GET_SET(be->km.v7.rcp[index].swx_cch, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_A:
+ GET_SET(be->km.v7.rcp[index].swx_sel_a, value);
+ break;
+
+ case HW_KM_RCP_SWX_SEL_B:
+ GET_SET(be->km.v7.rcp[index].swx_sel_b, value);
+ break;
+
+ case HW_KM_RCP_MASK_A:
+ if (word_off > KM_RCP_MASK_D_A_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_d_a[word_off], value);
+ break;
+
+ case HW_KM_RCP_MASK_B:
+ if (word_off > KM_RCP_MASK_B_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->km.v7.rcp[index].mask_b[word_off], value);
+ break;
+
+ case HW_KM_RCP_DUAL:
+ GET_SET(be->km.v7.rcp[index].dual, value);
+ break;
+
+ case HW_KM_RCP_PAIRED:
+ GET_SET(be->km.v7.rcp[index].paired, value);
+ break;
+
+ case HW_KM_RCP_EL_A:
+ GET_SET(be->km.v7.rcp[index].el_a, value);
+ break;
+
+ case HW_KM_RCP_EL_B:
+ GET_SET(be->km.v7.rcp[index].el_b, value);
+ break;
+
+ case HW_KM_RCP_INFO_A:
+ GET_SET(be->km.v7.rcp[index].info_a, value);
+ break;
+
+ case HW_KM_RCP_INFO_B:
+ GET_SET(be->km.v7.rcp[index].info_b, value);
+ break;
+
+ case HW_KM_RCP_FTM_A:
+ GET_SET(be->km.v7.rcp[index].ftm_a, value);
+ break;
+
+ case HW_KM_RCP_FTM_B:
+ GET_SET(be->km.v7.rcp[index].ftm_b, value);
+ break;
+
+ case HW_KM_RCP_BANK_A:
+ GET_SET(be->km.v7.rcp[index].bank_a, value);
+ break;
+
+ case HW_KM_RCP_BANK_B:
+ GET_SET(be->km.v7.rcp[index].bank_b, value);
+ break;
+
+ case HW_KM_RCP_KL_A:
+ GET_SET(be->km.v7.rcp[index].kl_a, value);
+ break;
+
+ case HW_KM_RCP_KL_B:
+ GET_SET(be->km.v7.rcp[index].kl_b, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_A:
+ GET_SET(be->km.v7.rcp[index].keyway_a, value);
+ break;
+
+ case HW_KM_RCP_KEYWAY_B:
+ GET_SET(be->km.v7.rcp[index].keyway_b, value);
+ break;
+
+ case HW_KM_RCP_SYNERGY_MODE:
+ GET_SET(be->km.v7.rcp[index].synergy_mode, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw0_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW0_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw0_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_DYN:
+ GET_SET(be->km.v7.rcp[index].dw2_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_DW2_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].dw2_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw4_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW4_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw4_b_ofs, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_DYN:
+ GET_SET(be->km.v7.rcp[index].sw5_b_dyn, value);
+ break;
+
+ case HW_KM_RCP_SW5_B_OFS:
+ GET_SET_SIGNED(be->km.v7.rcp[index].sw5_b_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_rcp_set(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, &value, 0);
+}
+
+int hw_mod_km_rcp_get(struct flow_api_backend_s *be, enum hw_km_e field, int index, int word_off,
+ uint32_t *value)
+{
+ return hw_mod_km_rcp_mod(be, field, index, word_off, value, 1);
+}
+
int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -180,6 +414,103 @@ int hw_mod_km_cam_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_cam_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_cam_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ if ((unsigned int)bank >= be->km.nb_cam_banks) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ if ((unsigned int)record >= be->km.nb_cam_records) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ unsigned int index = bank * be->km.nb_cam_records + record;
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_CAM_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->km.v7.cam[index], (uint8_t)*value, sizeof(struct km_v7_cam_s));
+ break;
+
+ case HW_KM_CAM_W0:
+ GET_SET(be->km.v7.cam[index].w0, value);
+ break;
+
+ case HW_KM_CAM_W1:
+ GET_SET(be->km.v7.cam[index].w1, value);
+ break;
+
+ case HW_KM_CAM_W2:
+ GET_SET(be->km.v7.cam[index].w2, value);
+ break;
+
+ case HW_KM_CAM_W3:
+ GET_SET(be->km.v7.cam[index].w3, value);
+ break;
+
+ case HW_KM_CAM_W4:
+ GET_SET(be->km.v7.cam[index].w4, value);
+ break;
+
+ case HW_KM_CAM_W5:
+ GET_SET(be->km.v7.cam[index].w5, value);
+ break;
+
+ case HW_KM_CAM_FT0:
+ GET_SET(be->km.v7.cam[index].ft0, value);
+ break;
+
+ case HW_KM_CAM_FT1:
+ GET_SET(be->km.v7.cam[index].ft1, value);
+ break;
+
+ case HW_KM_CAM_FT2:
+ GET_SET(be->km.v7.cam[index].ft2, value);
+ break;
+
+ case HW_KM_CAM_FT3:
+ GET_SET(be->km.v7.cam[index].ft3, value);
+ break;
+
+ case HW_KM_CAM_FT4:
+ GET_SET(be->km.v7.cam[index].ft4, value);
+ break;
+
+ case HW_KM_CAM_FT5:
+ GET_SET(be->km.v7.cam[index].ft5, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_cam_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_cam_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcam_flush(struct flow_api_backend_s *be, int start_bank, int count)
{
if (count == ALL_ENTRIES)
@@ -273,6 +604,12 @@ int hw_mod_km_tcam_set(struct flow_api_backend_s *be, enum hw_km_e field, int ba
return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 0);
}
+int hw_mod_km_tcam_get(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int byte,
+ int byte_val, uint32_t *value_set)
+{
+ return hw_mod_km_tcam_mod(be, field, bank, byte, byte_val, value_set, 1);
+}
+
int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
@@ -288,6 +625,49 @@ int hw_mod_km_tci_flush(struct flow_api_backend_s *be, int start_bank, int start
return be->iface->km_tci_flush(be->be_dev, &be->km, start_bank, start_record, count);
}
+static int hw_mod_km_tci_mod(struct flow_api_backend_s *be, enum hw_km_e field, int bank,
+ int record, uint32_t *value, int get)
+{
+ unsigned int index = bank * be->km.nb_tcam_bank_width + record;
+
+ if (index >= (be->km.nb_tcam_banks * be->km.nb_tcam_bank_width)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 7:
+ switch (field) {
+ case HW_KM_TCI_COLOR:
+ GET_SET(be->km.v7.tci[index].color, value);
+ break;
+
+ case HW_KM_TCI_FT:
+ GET_SET(be->km.v7.tci[index].ft, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 7 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_km_tci_set(struct flow_api_backend_s *be, enum hw_km_e field, int bank, int record,
+ uint32_t value)
+{
+ return hw_mod_km_tci_mod(be, field, bank, record, &value, 0);
+}
+
int hw_mod_km_tcq_flush(struct flow_api_backend_s *be, int start_bank, int start_record, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 5572662647..4737460cdf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -40,7 +40,19 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_km_rcp {
+ struct hw_db_inline_km_rcp_data data;
+ int ref;
+
+ struct hw_db_inline_resource_db_km_ft {
+ struct hw_db_inline_km_ft_data data;
+ int ref;
+ } *ft;
+ } *km;
+
uint32_t nb_cat;
+ uint32_t nb_km_ft;
+ uint32_t nb_km_rcp;
/* Hardware */
@@ -91,6 +103,25 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_km_ft = ndev->be.cat.nb_flow_types;
+ db->nb_km_rcp = ndev->be.km.nb_categories;
+ db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
+
+ if (db->km == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ db->km[i].ft = calloc(db->nb_km_ft * db->nb_cat,
+ sizeof(struct hw_db_inline_resource_db_km_ft));
+
+ if (db->km[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
*db_handle = db;
return 0;
}
@@ -104,6 +135,13 @@ void hw_db_inline_destroy(void *db_handle)
free(db->slc_lr);
free(db->cat);
+ if (db->km) {
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
+ free(db->km[i].ft);
+
+ free(db->km);
+ }
+
free(db->cfn);
free(db);
@@ -134,12 +172,61 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_KM_RCP:
+ hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
+ break;
+
default:
break;
}
}
}
+
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ for (uint32_t i = 0; i < size; ++i) {
+ if (idxs[i].type != type)
+ continue;
+
+ switch (type) {
+ case HW_DB_IDX_TYPE_NONE:
+ return NULL;
+
+ case HW_DB_IDX_TYPE_CAT:
+ return &db->cat[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_QSL:
+ return &db->qsl[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_COT:
+ return &db->cot[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_SLC_LR:
+ return &db->slc_lr[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_KM_RCP:
+ return &db->km[idxs[i].id1].data;
+
+ case HW_DB_IDX_TYPE_KM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
+ default:
+ return NULL;
+ }
+ }
+
+ return NULL;
+}
+
/******************************************************************************/
/* Filter */
/******************************************************************************/
@@ -614,3 +701,150 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->cat[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* KM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_km_compare(const struct hw_db_inline_km_rcp_data *data1,
+ const struct hw_db_inline_km_rcp_data *data2)
+{
+ return data1->rcp == data2->rcp;
+}
+
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_km_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_RCP;
+
+ for (uint32_t i = 0; i < db->nb_km_rcp; ++i) {
+ if (!found && db->km[i].ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (db->km[i].ref > 0 && hw_db_inline_km_compare(data, &db->km[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->km[idx.id1].data, data, sizeof(struct hw_db_inline_km_rcp_data));
+ db->km[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->km[idx.id1].ref += 1;
+}
+
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
+{
+ (void)ndev;
+ (void)db_handle;
+
+ if (idx.error)
+ return;
+}
+
+/******************************************************************************/
+/* KM FT */
+/******************************************************************************/
+
+static int hw_db_inline_km_ft_compare(const struct hw_db_inline_km_ft_data *data1,
+ const struct hw_db_inline_km_ft_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[data->km.id1];
+ struct hw_db_km_ft idx = { .raw = 0 };
+ uint32_t cat_offset = data->cat.ids * db->nb_cat;
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_KM_FT;
+ idx.id2 = data->km.id1;
+ idx.id3 = data->cat.ids;
+
+ if (km_rcp->data.rcp == 0) {
+ idx.id1 = 0;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_km_ft; ++i) {
+ const struct hw_db_inline_resource_db_km_ft *km_ft = &km_rcp->ft[cat_offset + i];
+
+ if (!found && km_ft->ref <= 0) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (km_ft->ref > 0 && hw_db_inline_km_ft_compare(data, &km_ft->data)) {
+ idx.id1 = i;
+ hw_db_inline_km_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&km_rcp->ft[cat_offset + idx.id1].data, data,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error) {
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+ db->km[idx.id2].ft[cat_offset + idx.id1].ref += 1;
+ }
+}
+
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_km_rcp *km_rcp = &db->km[idx.id2];
+ uint32_t cat_offset = idx.id3 * db->nb_cat;
+
+ if (idx.error)
+ return;
+
+ km_rcp->ft[cat_offset + idx.id1].ref -= 1;
+
+ if (km_rcp->ft[cat_offset + idx.id1].ref <= 0) {
+ memset(&km_rcp->ft[cat_offset + idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_km_ft_data));
+ km_rcp->ft[cat_offset + idx.id1].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index d0435acaef..e104ba7327 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_action_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_cot_idx {
HW_DB_IDX;
};
@@ -48,12 +52,22 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_km_idx {
+ HW_DB_IDX;
+};
+
+struct hw_db_km_ft {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_KM_FT,
};
/* Functionality data types */
@@ -123,6 +137,16 @@ struct hw_db_inline_action_set_data {
};
};
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -130,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle);
void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_idx *idxs,
uint32_t size);
+const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
@@ -158,6 +184,18 @@ void hw_db_inline_cat_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
/**/
+struct hw_db_km_idx hw_db_inline_km_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_rcp_data *data);
+void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx);
+
+struct hw_db_km_ft hw_db_inline_km_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_km_ft_data *data);
+void hw_db_inline_km_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_ft idx);
+
+/**/
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a5b15bc281..bf6cbcf37d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2334,6 +2334,23 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
+ const bool empty_pattern = fd_has_empty_pattern(fd);
+
+ /* Setup COT */
+ struct hw_db_inline_cot_data cot_data = {
+ .matcher_color_contrib = empty_pattern ? 0x0 : 0x4, /* FT key C */
+ .frag_rcp = 0,
+ };
+ struct hw_db_cot_idx cot_idx =
+ hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle, &cot_data);
+ local_idxs[(*local_idx_counter)++] = cot_idx.raw;
+
+ if (cot_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference COT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Finalize QSL */
struct hw_db_qsl_idx qsl_idx =
hw_db_inline_qsl_add(dev->ndev, dev->ndev->hw_db_handle, qsl_data);
@@ -2428,6 +2445,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/*
* Flow for group 0
*/
+ int identical_km_entry_ft = -1;
+
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -2502,6 +2521,130 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ /* Setup KM RCP */
+ struct hw_db_inline_km_rcp_data km_rcp_data = { .rcp = 0 };
+
+ if (fd->km.num_ftype_elem) {
+ struct flow_handle *flow = dev->ndev->flow_base, *found_flow = NULL;
+
+ if (km_key_create(&fd->km, fh->port_id)) {
+ NT_LOG(ERR, FILTER, "KM creation failed");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.be = &dev->ndev->be;
+
+ /* Look for existing KM RCPs */
+ while (flow) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLOW &&
+ flow->fd->km.flow_type) {
+ int res = km_key_compare(&fd->km, &flow->fd->km);
+
+ if (res < 0) {
+ /* Flow rcp and match data is identical */
+ identical_km_entry_ft = flow->fd->km.flow_type;
+ found_flow = flow;
+ break;
+ }
+
+ if (res > 0) {
+ /* Flow rcp found and match data is different */
+ found_flow = flow;
+ }
+ }
+
+ flow = flow->next;
+ }
+
+ km_attach_ndev_resource_management(&fd->km, &dev->ndev->km_res_handle);
+
+ if (found_flow != NULL) {
+ /* Reuse existing KM RCP */
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)
+ found_flow->flm_db_idxs,
+ found_flow->flm_db_idx_counter);
+
+ if (other_km_rcp_data == NULL ||
+ flow_nic_ref_resource(dev->ndev, RES_KM_CATEGORY,
+ other_km_rcp_data->rcp)) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference existing KM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_data.rcp = other_km_rcp_data->rcp;
+ } else {
+ /* Alloc new KM RCP */
+ int rcp = flow_nic_alloc_resource(dev->ndev, RES_KM_CATEGORY, 1);
+
+ if (rcp < 0) {
+ NT_LOG(ERR, FILTER,
+ "Could not reference KM RCP resource (flow_nic_alloc)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ km_rcp_set(&fd->km, rcp);
+ km_rcp_data.rcp = (uint32_t)rcp;
+ }
+ }
+
+ struct hw_db_km_idx km_idx =
+ hw_db_inline_km_add(dev->ndev, dev->ndev->hw_db_handle, &km_rcp_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = km_idx.raw;
+
+ if (km_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM RCP resource (db_inline)");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Setup KM FT */
+ struct hw_db_inline_km_ft_data km_ft_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ };
+ struct hw_db_km_ft km_ft_idx =
+ hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = km_ft_idx.raw;
+
+ if (km_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference KM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ /* Finalize KM RCP */
+ if (fd->km.num_ftype_elem) {
+ if (identical_km_entry_ft >= 0 && identical_km_entry_ft != km_ft_idx.id1) {
+ NT_LOG(ERR, FILTER,
+ "Identical KM matches cannot have different KM FTs");
+ flow_nic_set_error(ERR_MATCH_FAILED_BY_HW_LIMITS, error);
+ goto error_out;
+ }
+
+ fd->km.flow_type = km_ft_idx.id1;
+
+ if (fd->km.target == KM_CAM) {
+ uint32_t ft_a_mask = 0;
+ hw_mod_km_rcp_get(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0, &ft_a_mask);
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_FTM_A,
+ (int)km_rcp_data.rcp, 0,
+ ft_a_mask | (1 << fd->km.flow_type));
+ }
+
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)km_rcp_data.rcp, 1);
+
+ km_write_data_match_entry(&fd->km, 0);
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -2782,6 +2925,25 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
} else {
NT_LOG(DBG, FILTER, "removing flow :%p", fh);
+ if (fh->fd->km.num_ftype_elem) {
+ km_clear_data_match_entry(&fh->fd->km);
+
+ const struct hw_db_inline_km_rcp_data *other_km_rcp_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_KM_RCP,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ if (other_km_rcp_data != NULL &&
+ flow_nic_deref_resource(dev->ndev, RES_KM_CATEGORY,
+ (int)other_km_rcp_data->rcp) == 0) {
+ hw_mod_km_rcp_set(&dev->ndev->be, HW_KM_RCP_PRESET_ALL,
+ (int)other_km_rcp_data->rcp, 0, 0);
+ hw_mod_km_rcp_flush(&dev->ndev->be, (int)other_km_rcp_data->rcp,
+ 1);
+ }
+ }
+
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
(struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
free(fh->fd);
--
2.45.0
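[Editor's note] The KM accessor functions in this patch (`hw_mod_km_rcp_set()`/`hw_mod_km_rcp_get()`, and likewise for CAM, TCI and TCAM) all delegate to one internal `*_mod()` routine selected by a `get` flag, so the bounds checks and the big field switch exist in exactly one place. A hypothetical miniature of that pattern (the names `regs_mod`, `FIELD_W0` etc. are illustrative, not from the driver):

```c
#include <stdint.h>

/* Hypothetical miniature of the driver's accessor pattern: one internal
 * _mod() routine serves both directions, selected by the 'get' flag. The
 * GET_SET macro reads the enclosing 'get' variable, as in the driver. */
#define GET_SET(reg, pval) \
	do { \
		if (get) \
			*(pval) = (reg); \
		else \
			(reg) = *(pval); \
	} while (0)

enum field_e { FIELD_W0, FIELD_W1 };

struct regs_s {
	uint32_t w0;
	uint32_t w1;
};

static int regs_mod(struct regs_s *r, enum field_e field, uint32_t *value, int get)
{
	switch (field) {
	case FIELD_W0:
		GET_SET(r->w0, value);
		break;

	case FIELD_W1:
		GET_SET(r->w1, value);
		break;

	default:
		return -1;	/* unsupported field */
	}

	return 0;
}

/* Thin public wrappers, mirroring hw_mod_km_rcp_set()/hw_mod_km_rcp_get() */
int regs_set(struct regs_s *r, enum field_e field, uint32_t value)
{
	return regs_mod(r, field, &value, 0);
}

int regs_get(struct regs_s *r, enum field_e field, uint32_t *value)
{
	return regs_mod(r, field, value, 1);
}
```

The set wrapper passes the scalar by address so both directions share one signature; an unsupported field is rejected once, in `regs_mod()`.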
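[Editor's note] `hw_db_inline_km_add()` and `hw_db_inline_km_ft_add()` above follow a reuse-or-allocate pattern: scan the table for a live entry with identical data and bump its reference count, otherwise claim the first free slot, and fail only when neither exists. A hypothetical reduction of that pattern (names and the `uint32_t` payload are illustrative, not the driver's structs):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical reduction of the hw_db_inline_km_add() reuse-or-allocate
 * pattern: identical data shares one slot through a reference count. */
#define DB_SLOTS 4

struct db_entry {
	uint32_t data;	/* stand-in for the real *_rcp_data struct */
	int ref;	/* 0 means the slot is free */
};

static struct db_entry db[DB_SLOTS];

/* Returns the slot index, or -1 when the table is exhausted
 * (the analogue of ERR_MATCH_RESOURCE_EXHAUSTION). */
static int db_add(uint32_t data)
{
	int free_slot = -1;

	for (int i = 0; i < DB_SLOTS; i++) {
		if (db[i].ref > 0 && db[i].data == data) {
			db[i].ref++;	/* reuse the identical entry */
			return i;
		}

		if (free_slot < 0 && db[i].ref <= 0)
			free_slot = i;	/* remember first free slot */
	}

	if (free_slot < 0)
		return -1;

	db[free_slot].data = data;
	db[free_slot].ref = 1;
	return free_slot;
}

static void db_deref(int idx)
{
	if (--db[idx].ref <= 0)
		memset(&db[idx], 0, sizeof(db[idx]));	/* release the slot */
}
```

The scan records the first free slot while still looking for a match, so a duplicate anywhere in the table is found before a new slot is consumed.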
* [PATCH v5 31/80] net/ntnic: add hash API
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (29 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 30/80] net/ntnic: add KM module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 32/80] net/ntnic: add TPE module Serhii Iliushyk
` (48 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Hasher module calculates a configurable hash value
to be used internally by the FPGA.
The module supports both Toeplitz and NT-hash.
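[Editor's note] For reference, Toeplitz is the standard RSS hash: each set bit of the input tuple XORs a sliding 32-bit window of the secret key into the result. A minimal software sketch of that algorithm (the FPGA implementation and its key layout are not shown in this patch; `toeplitz_hash` is an illustrative name):

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal software Toeplitz hash, the classic RSS algorithm this module
 * implements in hardware. The key must be at least len + 4 bytes long
 * (RSS keys are typically 40 bytes, cf. MAX_RSS_KEY_LEN). */
static uint32_t toeplitz_hash(const uint8_t *key, const uint8_t *data, size_t len)
{
	uint32_t hash = 0;
	/* Sliding 32-bit window over the key, advanced one bit per input bit */
	uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
			  ((uint32_t)key[2] << 8) | key[3];

	for (size_t i = 0; i < len; i++) {
		for (int bit = 7; bit >= 0; bit--) {
			if (data[i] & (1u << bit))
				hash ^= window;

			/* shift the next key bit into the window */
			window = (window << 1) | ((key[i + 4] >> bit) & 1u);
		}
	}

	return hash;
}
```

Because only set input bits contribute, hashing a single `0x80` byte returns the first four key bytes verbatim, which makes the routine easy to sanity-check against known key material.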
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 2 +
drivers/net/ntnic/include/flow_api.h | 40 +
drivers/net/ntnic/include/flow_api_engine.h | 17 +
drivers/net/ntnic/include/hw_mod_backend.h | 20 +
.../ntnic/include/stream_binary_flow_api.h | 25 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 212 +++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.c | 156 ++++
drivers/net/ntnic/nthw/flow_api/flow_hasher.h | 21 +
drivers/net/ntnic/nthw/flow_api/flow_km.c | 25 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c | 179 ++++
.../profile_inline/flow_api_hw_db_inline.c | 142 +++
.../profile_inline/flow_api_hw_db_inline.h | 11 +
.../profile_inline/flow_api_profile_inline.c | 846 +++++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 4 +
drivers/net/ntnic/ntnic_mod_reg.h | 4 +
16 files changed, 1704 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/flow_hasher.h
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index ed306e05b5..f2cb7a362a 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -54,6 +54,8 @@ Features
- TX VLAN insertion via raw encap.
- CAM and TCAM based matching.
- Exact match of 140 million flows and policies.
+- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
+ verification.
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index edffd0a57a..2e96fa5bed 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -29,6 +29,37 @@ struct hw_mod_resource_s {
*/
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev);
+/**
+ * A structure used to configure the Receive Side Scaling (RSS) feature
+ * of an Ethernet port.
+ */
+struct nt_eth_rss_conf {
+ /**
+ * In rte_eth_dev_rss_hash_conf_get(), the *rss_key_len* should be
+ * greater than or equal to the *hash_key_size* which get from
+ * rte_eth_dev_info_get() API. And the *rss_key* should contain at least
+ * *hash_key_size* bytes. If not meet these requirements, the query
+ * result is unreliable even if the operation returns success.
+ *
+ * In rte_eth_dev_rss_hash_update() or rte_eth_dev_configure(), if
+ * *rss_key* is not NULL, the *rss_key_len* indicates the length of the
+ * *rss_key* in bytes and it should be equal to *hash_key_size*.
+ * If *rss_key* is NULL, drivers are free to use a random or a default key.
+ */
+ uint8_t rss_key[MAX_RSS_KEY_LEN];
+ /**
+ * Indicates the type of packets or the specific part of packets to
+ * which RSS hashing is to be applied.
+ */
+ uint64_t rss_hf;
+ /**
+ * Hash algorithm.
+ */
+ enum rte_eth_hash_function algorithm;
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask);
+
struct flow_eth_dev {
/* NIC that owns this port device */
struct flow_nic_dev *ndev;
@@ -49,6 +80,11 @@ struct flow_eth_dev {
struct flow_eth_dev *next;
};
+enum flow_nic_hash_e {
+ HASH_ALGO_ROUND_ROBIN = 0,
+ HASH_ALGO_5TUPLE,
+};
+
/* registered NIC backends */
struct flow_nic_dev {
uint8_t adapter_no; /* physical adapter no in the host system */
@@ -191,4 +227,8 @@ void flow_nic_free_resource(struct flow_nic_dev *ndev, enum res_type_e res_type,
int flow_nic_ref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
int flow_nic_deref_resource(struct flow_nic_dev *ndev, enum res_type_e res_type, int index);
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm);
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index a0f02f4e8a..e52363f04e 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -129,6 +129,7 @@ struct km_flow_def_s {
int bank_used;
uint32_t *cuckoo_moves; /* for CAM statistics only */
struct cam_distrib_s *cam_dist;
+ struct hasher_s *hsh;
/* TCAM specific bank management */
struct tcam_distrib_s *tcam_dist;
@@ -136,6 +137,17 @@ struct km_flow_def_s {
int tcam_record;
};
+/*
+ * RSS configuration, see struct rte_flow_action_rss
+ */
+struct hsh_def_s {
+ enum rte_eth_hash_function func; /* RSS hash function to apply */
+ /* RSS hash types, see definition of RTE_ETH_RSS_* for hash calculation options */
+ uint64_t types;
+ uint32_t key_len; /* Hash key length in bytes. */
+ const uint8_t *key; /* Hash key. */
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -247,6 +259,11 @@ struct nic_flow_def {
* Key Matcher flow definitions
*/
struct km_flow_def_s km;
+
+ /*
+ * Hash module RSS definitions
+ */
+ struct hsh_def_s hsh;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 26903f2183..cee148807a 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -149,14 +149,27 @@ enum km_flm_if_select_e {
int debug
enum frame_offs_e {
+ DYN_SOF = 0,
DYN_L2 = 1,
DYN_FIRST_VLAN = 2,
+ DYN_MPLS = 3,
DYN_L3 = 4,
+ DYN_ID_IPV4_6 = 5,
+ DYN_FINAL_IP_DST = 6,
DYN_L4 = 7,
DYN_L4_PAYLOAD = 8,
+ DYN_TUN_PAYLOAD = 9,
+ DYN_TUN_L2 = 10,
+ DYN_TUN_VLAN = 11,
+ DYN_TUN_MPLS = 12,
DYN_TUN_L3 = 13,
+ DYN_TUN_ID_IPV4_6 = 14,
+ DYN_TUN_FINAL_IP_DST = 15,
DYN_TUN_L4 = 16,
DYN_TUN_L4_PAYLOAD = 17,
+ DYN_EOF = 18,
+ DYN_L3_PAYLOAD_END = 19,
+ DYN_TUN_L3_PAYLOAD_END = 20,
SB_VNI = SWX_INFO | 1,
SB_MAC_PORT = SWX_INFO | 2,
SB_KCC_ID = SWX_INFO | 3
@@ -227,6 +240,11 @@ enum {
};
+enum {
+ HASH_HASH_NONE = 0,
+ HASH_5TUPLE = 8,
+};
+
enum {
CPY_SELECT_DSCP_IPV4 = 0,
CPY_SELECT_DSCP_IPV6 = 1,
@@ -670,6 +688,8 @@ int hw_mod_hsh_alloc(struct flow_api_backend_s *be);
void hw_mod_hsh_free(struct flow_api_backend_s *be);
int hw_mod_hsh_reset(struct flow_api_backend_s *be);
int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value);
struct qsl_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index 8097518d61..e5fe686d99 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -12,6 +12,31 @@
/* Max RSS hash key length in bytes */
#define MAX_RSS_KEY_LEN 40
+/* NT specific MASKs for RSS configuration */
+/* NOTE: Masks are required for correct RSS configuration, do not modify them! */
+#define NT_ETH_RSS_IPV4_MASK \
+ (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+
+#define NT_ETH_RSS_IPV6_MASK \
+ (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+
+#define NT_ETH_RSS_IP_MASK \
+ (NT_ETH_RSS_IPV4_MASK | NT_ETH_RSS_IPV6_MASK | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
+
+/* List of all RSS flags supported for RSS calculation offload */
+#define NT_ETH_RSS_OFFLOAD_MASK \
+ (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | \
+ RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY | \
+ RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_LEVEL_MASK | \
+ RTE_ETH_RSS_IPV4_CHKSUM | RTE_ETH_RSS_L4_CHKSUM | RTE_ETH_RSS_PORT | RTE_ETH_RSS_GTPU)
+
/*
* Flow frontend for binary programming interface
*/
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e1fef37ccb..d7e6d05556 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -56,6 +56,7 @@ sources = files(
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
'nthw/flow_api/flow_filter.c',
+ 'nthw/flow_api/flow_hasher.c',
'nthw/flow_api/flow_kcc.c',
'nthw/flow_api/flow_km.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 22d7905c62..577b1c83b5 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -2,6 +2,8 @@
* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2023 Napatech A/S
*/
+#include "ntlog.h"
+#include "nt_util.h"
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
@@ -10,6 +12,11 @@
#include "flow_api.h"
#include "flow_filter.h"
+#define RSS_TO_STRING(name) \
+ { \
+ name, #name \
+ }
+
const char *dbg_res_descr[] = {
/* RES_QUEUE */ "RES_QUEUE",
/* RES_CAT_CFN */ "RES_CAT_CFN",
@@ -773,6 +780,211 @@ void *flow_api_get_be_dev(struct flow_nic_dev *ndev)
return ndev->be.be_dev;
}
+/* Information for a given RSS type. */
+struct rss_type_info {
+ uint64_t rss_type;
+ const char *str;
+};
+
+static struct rss_type_info rss_to_string[] = {
+ /* RTE_BIT64(2) IPv4 dst + IPv4 src */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4),
+ /* RTE_BIT64(3) IPv4 dst + IPv4 src + Identification of group of fragments */
+ RSS_TO_STRING(RTE_ETH_RSS_FRAG_IPV4),
+ /* RTE_BIT64(4) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_TCP),
+ /* RTE_BIT64(5) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_UDP),
+ /* RTE_BIT64(6) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_SCTP),
+ /* RTE_BIT64(7) IPv4 dst + IPv4 src + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_NONFRAG_IPV4_OTHER),
+ /*
+ * RTE_BIT64(14) 128-bits of L2 payload starting after src MAC, i.e. including optional
+ * VLAN tag and ethertype. Overrides all L3 and L4 flags at the same level, but inner
+ * L2 payload can be combined with outer S-VLAN and GTPU TEID flags.
+ */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_PAYLOAD),
+ /* RTE_BIT64(18) L4 dst + L4 src + L4 protocol - see comment of RTE_ETH_RSS_L4_CHKSUM */
+ RSS_TO_STRING(RTE_ETH_RSS_PORT),
+ /* RTE_BIT64(19) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_VXLAN),
+ /* RTE_BIT64(20) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_GENEVE),
+ /* RTE_BIT64(21) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_NVGRE),
+ /* RTE_BIT64(23) GTP TEID - always from outer GTPU header */
+ RSS_TO_STRING(RTE_ETH_RSS_GTPU),
+ /* RTE_BIT64(24) MAC dst + MAC src */
+ RSS_TO_STRING(RTE_ETH_RSS_ETH),
+ /* RTE_BIT64(25) outermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_S_VLAN),
+ /* RTE_BIT64(26) innermost VLAN ID + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_C_VLAN),
+ /* RTE_BIT64(27) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ESP),
+ /* RTE_BIT64(28) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_AH),
+ /* RTE_BIT64(29) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV3),
+ /* RTE_BIT64(30) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PFCP),
+ /* RTE_BIT64(31) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_PPPOE),
+ /* RTE_BIT64(32) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_ECPRI),
+ /* RTE_BIT64(33) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_MPLS),
+ /* RTE_BIT64(34) IPv4 Header checksum + L4 protocol */
+ RSS_TO_STRING(RTE_ETH_RSS_IPV4_CHKSUM),
+
+ /*
+ * if combined with RTE_ETH_RSS_NONFRAG_IPV4_[TCP|UDP|SCTP] then
+ * L4 protocol + chosen protocol header Checksum
+ * else
+ * error
+ */
+ /* RTE_BIT64(35) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_CHKSUM),
+#ifndef ANDROMEDA_DPDK_21_11
+ /* RTE_BIT64(36) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L2TPV2),
+#endif
+
+ { RTE_BIT64(37), "unknown_RTE_BIT64(37)" },
+ { RTE_BIT64(38), "unknown_RTE_BIT64(38)" },
+ { RTE_BIT64(39), "unknown_RTE_BIT64(39)" },
+ { RTE_BIT64(40), "unknown_RTE_BIT64(40)" },
+ { RTE_BIT64(41), "unknown_RTE_BIT64(41)" },
+ { RTE_BIT64(42), "unknown_RTE_BIT64(42)" },
+ { RTE_BIT64(43), "unknown_RTE_BIT64(43)" },
+ { RTE_BIT64(44), "unknown_RTE_BIT64(44)" },
+ { RTE_BIT64(45), "unknown_RTE_BIT64(45)" },
+ { RTE_BIT64(46), "unknown_RTE_BIT64(46)" },
+ { RTE_BIT64(47), "unknown_RTE_BIT64(47)" },
+ { RTE_BIT64(48), "unknown_RTE_BIT64(48)" },
+ { RTE_BIT64(49), "unknown_RTE_BIT64(49)" },
+
+ /* RTE_BIT64(50) outermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_OUTERMOST),
+ /* RTE_BIT64(51) innermost encapsulation */
+ RSS_TO_STRING(RTE_ETH_RSS_LEVEL_INNERMOST),
+
+ /* RTE_BIT64(52) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE96),
+ /* RTE_BIT64(53) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE64),
+ /* RTE_BIT64(54) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE56),
+ /* RTE_BIT64(55) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE48),
+ /* RTE_BIT64(56) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE40),
+ /* RTE_BIT64(57) Not supported */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_PRE32),
+
+ /* RTE_BIT64(58) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_DST_ONLY),
+ /* RTE_BIT64(59) */
+ RSS_TO_STRING(RTE_ETH_RSS_L2_SRC_ONLY),
+ /* RTE_BIT64(60) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_DST_ONLY),
+ /* RTE_BIT64(61) */
+ RSS_TO_STRING(RTE_ETH_RSS_L4_SRC_ONLY),
+ /* RTE_BIT64(62) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_DST_ONLY),
+ /* RTE_BIT64(63) */
+ RSS_TO_STRING(RTE_ETH_RSS_L3_SRC_ONLY),
+};
+
+int sprint_nt_rss_mask(char *str, uint16_t str_len, const char *prefix, uint64_t hash_mask)
+{
+ if (str == NULL || str_len == 0)
+ return -1;
+
+ memset(str, 0x0, str_len);
+ uint16_t str_end = 0;
+ const struct rss_type_info *start = rss_to_string;
+
+ for (const struct rss_type_info *p = start; p != start + ARRAY_SIZE(rss_to_string); ++p) {
+ if (p->rss_type & hash_mask) {
+ if (strlen(prefix) + strlen(p->str) < (size_t)(str_len - str_end)) {
+ snprintf(str + str_end, str_len - str_end, "%s", prefix);
+ str_end += strlen(prefix);
+ snprintf(str + str_end, str_len - str_end, "%s", p->str);
+ str_end += strlen(p->str);
+
+ } else {
+ return -1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Hash
+ */
+
+int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_hash_e algorithm)
+{
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ switch (algorithm) {
+ case HASH_ALGO_5TUPLE:
+ /* create an IPv6 5-tuple hash recipe and enable the adaptive IPv4 mask bit */
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW0_OFS, hsh_idx, 0, -16);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_PE, hsh_idx, 0, DYN_FINAL_IP_DST);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_QW4_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_PE, hsh_idx, 0, DYN_L4);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W8_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_PE, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_OFS, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_W9_P, hsh_idx, 0, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 1, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 2, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 3, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 4, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 5, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 6, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 7, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 8, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx, 9, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0, 0xffffffff);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_VALID, hsh_idx, 0, 1);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_HSH_TYPE, hsh_idx, 0, HASH_5TUPLE);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0, 1);
+
+ NT_LOG(DBG, FILTER, "Set IPv6 5-tuple hasher with adaptive IPv4 hashing");
+ break;
+
+ default:
+ case HASH_ALGO_ROUND_ROBIN:
+ /* zero is round-robin */
+ break;
+ }
+
+ return 0;
+}
+
+int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.c b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
new file mode 100644
index 0000000000..86dfc16e79
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.c
@@ -0,0 +1,156 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <math.h>
+
+#include "flow_hasher.h"
+
+static uint32_t shuffle(uint32_t x)
+{
+ return ((x & 0x00000002) << 29) | ((x & 0xAAAAAAA8) >> 3) | ((x & 0x15555555) << 3) |
+ ((x & 0x40000000) >> 29);
+}
+
+static uint32_t ror_inv(uint32_t x, const int s)
+{
+ return (x >> s) | ((~x) << (32 - s));
+}
+
+static uint32_t combine(uint32_t x, uint32_t y)
+{
+ uint32_t x1 = ror_inv(x, 15);
+ uint32_t x2 = ror_inv(x, 13);
+ uint32_t y1 = ror_inv(y, 3);
+ uint32_t y2 = ror_inv(y, 27);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint32_t mix(uint32_t x, uint32_t y)
+{
+ return shuffle(combine(x, y));
+}
+
+static uint64_t ror_inv3(uint64_t x)
+{
+ const uint64_t m = 0xE0000000E0000000ULL;
+
+ return ((x >> 3) | m) ^ ((x << 29) & m);
+}
+
+static uint64_t ror_inv13(uint64_t x)
+{
+ const uint64_t m = 0xFFF80000FFF80000ULL;
+
+ return ((x >> 13) | m) ^ ((x << 19) & m);
+}
+
+static uint64_t ror_inv15(uint64_t x)
+{
+ const uint64_t m = 0xFFFE0000FFFE0000ULL;
+
+ return ((x >> 15) | m) ^ ((x << 17) & m);
+}
+
+static uint64_t ror_inv27(uint64_t x)
+{
+ const uint64_t m = 0xFFFFFFE0FFFFFFE0ULL;
+
+ return ((x >> 27) | m) ^ ((x << 5) & m);
+}
+
+static uint64_t shuffle64(uint64_t x)
+{
+ return ((x & 0x0000000200000002) << 29) | ((x & 0xAAAAAAA8AAAAAAA8) >> 3) |
+ ((x & 0x1555555515555555) << 3) | ((x & 0x4000000040000000) >> 29);
+}
+
+static uint64_t pair(uint32_t x, uint32_t y)
+{
+ return ((uint64_t)x << 32) | y;
+}
+
+static uint64_t combine64(uint64_t x, uint64_t y)
+{
+ uint64_t x1 = ror_inv15(x);
+ uint64_t x2 = ror_inv13(x);
+ uint64_t y1 = ror_inv3(y);
+ uint64_t y2 = ror_inv27(y);
+
+ return x ^ y ^
+ ((x1 & y1 & ~x2 & ~y2) | (x1 & ~y1 & x2 & ~y2) | (x1 & ~y1 & ~x2 & y2) |
+ (~x1 & y1 & x2 & ~y2) | (~x1 & y1 & ~x2 & y2) | (~x1 & ~y1 & x2 & y2));
+}
+
+static uint64_t mix64(uint64_t x, uint64_t y)
+{
+ return shuffle64(combine64(x, y));
+}
+
+static uint32_t calc16(const uint32_t key[16])
+{
+ /*
+ * 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Layer 0
+ * \./ \./ \./ \./ \./ \./ \./ \./
+ * 0 1 2 3 4 5 6 7 Layer 1
+ * \__.__/ \__.__/ \__.__/ \__.__/
+ * 0 1 2 3 Layer 2
+ * \______.______/ \______.______/
+ * 0 1 Layer 3
+ * \______________.______________/
+ * 0 Layer 4
+ * / \
+ * \./
+ * 0 Layer 5
+ * / \
+ * \./ Layer 6
+ * value
+ */
+
+ uint64_t z;
+ uint32_t x;
+
+ z = mix64(mix64(mix64(pair(key[0], key[8]), pair(key[1], key[9])),
+ mix64(pair(key[2], key[10]), pair(key[3], key[11]))),
+ mix64(mix64(pair(key[4], key[12]), pair(key[5], key[13])),
+ mix64(pair(key[6], key[14]), pair(key[7], key[15]))));
+
+ x = mix((uint32_t)(z >> 32), (uint32_t)z);
+ x = mix(x, ror_inv(x, 17));
+ x = combine(x, ror_inv(x, 17));
+
+ return x;
+}
+
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result)
+{
+ uint64_t val;
+ uint32_t res;
+
+ val = calc16(key);
+ res = (uint32_t)val;
+
+ if (hsh->cam_bw > 32)
+ val = (val << (hsh->cam_bw - 32)) ^ val;
+
+ for (int i = 0; i < hsh->banks; i++) {
+ result[i] = (unsigned int)(val & hsh->cam_records_bw_mask);
+ val = val >> hsh->cam_records_bw;
+ }
+
+ return res;
+}
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records)
+{
+ hsh->banks = banks;
+ hsh->cam_records_bw = (int)(log2(nb_records - 1) + 1);
+ hsh->cam_records_bw_mask = (1U << hsh->cam_records_bw) - 1;
+ hsh->cam_bw = hsh->banks * hsh->cam_records_bw;
+
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_hasher.h b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
new file mode 100644
index 0000000000..15de8e9933
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/flow_hasher.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_HASHER_H_
+#define _FLOW_HASHER_H_
+
+#include <stdint.h>
+
+struct hasher_s {
+ int banks;
+ int cam_records_bw;
+ uint32_t cam_records_bw_mask;
+ int cam_bw;
+};
+
+int init_hasher(struct hasher_s *hsh, int banks, int nb_records);
+uint32_t gethash(struct hasher_s *hsh, const uint32_t key[16], int *result);
+
+#endif /* _FLOW_HASHER_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_km.c b/drivers/net/ntnic/nthw/flow_api/flow_km.c
index 30d6ea728e..f79919cb81 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_km.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_km.c
@@ -9,6 +9,7 @@
#include "hw_mod_backend.h"
#include "flow_api_engine.h"
#include "nt_util.h"
+#include "flow_hasher.h"
#define MAX_QWORDS 2
#define MAX_SWORDS 2
@@ -75,10 +76,25 @@ static int tcam_find_mapping(struct km_flow_def_s *km);
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle)
{
+ /*
+ * KM entries occupied in CAM - to manage the cuckoo shuffling
+ * and manage CAM population and usage
+ * KM entries occupied in TCAM - to manage population and usage
+ */
+ if (!*handle) {
+ *handle = calloc(1,
+ (size_t)CAM_ENTRIES + sizeof(uint32_t) + (size_t)TCAM_ENTRIES +
+ sizeof(struct hasher_s));
+ NT_LOG(DBG, FILTER, "Allocate NIC DEV CAM and TCAM record manager");
+ }
+
km->cam_dist = (struct cam_distrib_s *)*handle;
km->cuckoo_moves = (uint32_t *)((char *)km->cam_dist + CAM_ENTRIES);
km->tcam_dist =
(struct tcam_distrib_s *)((char *)km->cam_dist + CAM_ENTRIES + sizeof(uint32_t));
+
+ km->hsh = (struct hasher_s *)((char *)km->tcam_dist + TCAM_ENTRIES);
+ init_hasher(km->hsh, km->be->km.nb_cam_banks, km->be->km.nb_cam_records);
}
void km_free_ndev_resource_management(void **handle)
@@ -839,9 +855,18 @@ static int move_cuckoo_index_level(struct km_flow_def_s *km_parent, int bank_idx
static int km_write_data_to_cam(struct km_flow_def_s *km)
{
int res = 0;
+ int val[MAX_BANKS];
assert(km->be->km.nb_cam_banks <= MAX_BANKS);
assert(km->cam_dist);
+ /* word list without info set */
+ gethash(km->hsh, km->entry_word, val);
+
+ for (uint32_t i = 0; i < km->be->km.nb_cam_banks; i++) {
+ /* if paired we start always on an even address - reset bit 0 */
+ km->record_indexes[i] = (km->cam_paired) ? val[i] & ~1 : val[i];
+ }
+
NT_LOG(DBG, FILTER, "KM HASH [%03X, %03X, %03X]", km->record_indexes[0],
km->record_indexes[1], km->record_indexes[2]);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
index df5c00ac42..1750d09afb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_hsh.c
@@ -89,3 +89,182 @@ int hw_mod_hsh_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count
return be->iface->hsh_rcp_flush(be->be_dev, &be->hsh, start_idx, count);
}
+
+static int hw_mod_hsh_rcp_mod(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t *value, int get)
+{
+ if (index >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 5:
+ switch (field) {
+ case HW_HSH_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->hsh.v5.rcp[index], (uint8_t)*value,
+ sizeof(struct hsh_v5_rcp_s));
+ break;
+
+ case HW_HSH_RCP_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off);
+ break;
+
+ case HW_HSH_RCP_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if ((unsigned int)word_off >= be->hsh.nb_rcp) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->hsh.v5.rcp, struct hsh_v5_rcp_s, index, word_off,
+ be->hsh.nb_rcp);
+ break;
+
+ case HW_HSH_RCP_LOAD_DIST_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].load_dist_type, value);
+ break;
+
+ case HW_HSH_RCP_MAC_PORT_MASK:
+ if (word_off >= HSH_RCP_MAC_PORT_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].mac_port_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SORT:
+ GET_SET(be->hsh.v5.rcp[index].sort, value);
+ break;
+
+ case HW_HSH_RCP_QW0_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw0_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW0_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_HSH_RCP_QW4_PE:
+ GET_SET(be->hsh.v5.rcp[index].qw4_pe, value);
+ break;
+
+ case HW_HSH_RCP_QW4_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_PE:
+ GET_SET(be->hsh.v5.rcp[index].w8_pe, value);
+ break;
+
+ case HW_HSH_RCP_W8_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w8_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W8_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w8_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_PE:
+ GET_SET(be->hsh.v5.rcp[index].w9_pe, value);
+ break;
+
+ case HW_HSH_RCP_W9_OFS:
+ GET_SET_SIGNED(be->hsh.v5.rcp[index].w9_ofs, value);
+ break;
+
+ case HW_HSH_RCP_W9_SORT:
+ GET_SET(be->hsh.v5.rcp[index].w9_sort, value);
+ break;
+
+ case HW_HSH_RCP_W9_P:
+ GET_SET(be->hsh.v5.rcp[index].w9_p, value);
+ break;
+
+ case HW_HSH_RCP_P_MASK:
+ GET_SET(be->hsh.v5.rcp[index].p_mask, value);
+ break;
+
+ case HW_HSH_RCP_WORD_MASK:
+ if (word_off >= HSH_RCP_WORD_MASK_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].word_mask[word_off], value);
+ break;
+
+ case HW_HSH_RCP_SEED:
+ GET_SET(be->hsh.v5.rcp[index].seed, value);
+ break;
+
+ case HW_HSH_RCP_TNL_P:
+ GET_SET(be->hsh.v5.rcp[index].tnl_p, value);
+ break;
+
+ case HW_HSH_RCP_HSH_VALID:
+ GET_SET(be->hsh.v5.rcp[index].hsh_valid, value);
+ break;
+
+ case HW_HSH_RCP_HSH_TYPE:
+ GET_SET(be->hsh.v5.rcp[index].hsh_type, value);
+ break;
+
+ case HW_HSH_RCP_TOEPLITZ:
+ GET_SET(be->hsh.v5.rcp[index].toeplitz, value);
+ break;
+
+ case HW_HSH_RCP_K:
+ if (word_off >= HSH_RCP_KEY_SIZE) {
+ WORD_OFF_TOO_LARGE_LOG;
+ return WORD_OFF_TOO_LARGE;
+ }
+
+ GET_SET(be->hsh.v5.rcp[index].k[word_off], value);
+ break;
+
+ case HW_HSH_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->hsh.v5.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 5 */
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_hsh_rcp_set(struct flow_api_backend_s *be, enum hw_hsh_e field, uint32_t index,
+ uint32_t word_off, uint32_t value)
+{
+ return hw_mod_hsh_rcp_mod(be, field, index, word_off, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 4737460cdf..068c890b45 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,9 +30,15 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_hsh {
+ struct hw_db_inline_hsh_data data;
+ int ref;
+ } *hsh;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_hsh;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -122,6 +128,21 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
}
+ db->cfn = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cfn));
+
+ if (db->cfn == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_hsh = ndev->be.hsh.nb_rcp;
+ db->hsh = calloc(db->nb_hsh, sizeof(struct hw_db_inline_resource_db_hsh));
+
+ if (db->hsh == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
return 0;
}
@@ -133,6 +154,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->hsh);
+
free(db->cat);
if (db->km) {
@@ -180,6 +203,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_km_ft_deref(ndev, db_handle, *(struct hw_db_km_ft *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_HSH:
+ hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -219,6 +246,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_KM_FT:
return NULL; /* FTs can't be easily looked up */
+ case HW_DB_IDX_TYPE_HSH:
+ return &db->hsh[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -247,6 +277,7 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
{
(void)ft;
(void)qsl_hw_id;
const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
(void)offset;
@@ -848,3 +879,114 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* HSH */
+/******************************************************************************/
+
+static int hw_db_inline_hsh_compare(const struct hw_db_inline_hsh_data *data1,
+ const struct hw_db_inline_hsh_data *data2)
+{
+ for (uint32_t i = 0; i < MAX_RSS_KEY_LEN; ++i)
+ if (data1->key[i] != data2->key[i])
+ return 0;
+
+ return data1->func == data2->func && data1->hash_mask == data2->hash_mask;
+}
+
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_hsh_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_HSH;
+
+ /* check if default hash configuration shall be used, i.e. rss_hf is not set */
+ /*
+ * NOTE: hsh id 0 is reserved for the "default" HSH
+ * used by port configuration; all ports share the same default hash settings.
+ */
+ if (data->hash_mask == 0) {
+ idx.ids = 0;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_hsh; ++i) {
+ int ref = db->hsh[i].ref;
+
+ if (ref > 0 && hw_db_inline_hsh_compare(data, &db->hsh[i].data)) {
+ idx.ids = i;
+ hw_db_inline_hsh_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ struct nt_eth_rss_conf tmp_rss_conf;
+
+ tmp_rss_conf.rss_hf = data->hash_mask;
+ memcpy(tmp_rss_conf.rss_key, data->key, MAX_RSS_KEY_LEN);
+ tmp_rss_conf.algorithm = data->func;
+ int res = flow_nic_set_hasher_fields(ndev, idx.ids, tmp_rss_conf);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->hsh[idx.ids].ref = 1;
+ memcpy(&db->hsh[idx.ids].data, data, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, idx.ids);
+
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->hsh[idx.ids].ref += 1;
+}
+
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->hsh[idx.ids].ref -= 1;
+
+ if (db->hsh[idx.ids].ref <= 0) {
+ /*
+ * NOTE: hsh id 0 is reserved for "default" HSH used by
+ * port configuration, so we shall keep it even if
+ * it is not used by any flow
+ */
+ if (idx.ids > 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, idx.ids, 0, 0x0);
+ hw_mod_hsh_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->hsh[idx.ids].data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+ flow_nic_free_resource(ndev, RES_HSH_RCP, idx.ids);
+ }
+
+ db->hsh[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index e104ba7327..c97bdef1b7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -60,6 +60,10 @@ struct hw_db_km_ft {
HW_DB_IDX;
};
+struct hw_db_hsh_idx {
+ HW_DB_IDX;
+};
+
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
HW_DB_IDX_TYPE_COT,
@@ -68,6 +72,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_SLC_LR,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
+ HW_DB_IDX_TYPE_HSH,
};
/* Functionality data types */
@@ -133,6 +138,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_hsh_idx hsh;
};
};
};
@@ -175,6 +181,11 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_hsh_data *data);
+void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
+
/**/
struct hw_db_cat_idx hw_db_inline_cat_add(struct flow_nic_dev *ndev, void *db_handle,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index bf6cbcf37d..8ba100edd7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -25,6 +25,15 @@
#define NT_VIOLATING_MBR_CFN 0
#define NT_VIOLATING_MBR_QSL 1
+#define RTE_ETH_RSS_UDP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)
+
+#define RTE_ETH_RSS_TCP_COMBINED \
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX)
+
+#define NT_FLM_OP_UNLEARN 0
+#define NT_FLM_OP_LEARN 1
+
static void *flm_lrn_queue_arr;
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
@@ -2322,10 +2331,27 @@ static void setup_db_qsl_data(struct nic_flow_def *fd, struct hw_db_inline_qsl_d
}
}
+static void setup_db_hsh_data(struct nic_flow_def *fd, struct hw_db_inline_hsh_data *hsh_data)
+{
+ memset(hsh_data, 0x0, sizeof(struct hw_db_inline_hsh_data));
+
+ hsh_data->func = fd->hsh.func;
+ hsh_data->hash_mask = fd->hsh.types;
+
+ if (fd->hsh.key != NULL) {
+ /*
+ * Just a safeguard. Checking and error handling of rss_key_len
+ * shall be done at API layers above.
+ */
+ memcpy(&hsh_data->key, fd->hsh.key,
+ fd->hsh.key_len < MAX_RSS_KEY_LEN ? fd->hsh.key_len : MAX_RSS_KEY_LEN);
+ }
+}
+
static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
- const struct hw_db_inline_hsh_data *hsh_data __rte_unused,
+ const struct hw_db_inline_hsh_data *hsh_data,
uint32_t group __rte_unused,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
@@ -2362,6 +2388,17 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle, hsh_data);
+ local_idxs[(*local_idx_counter)++] = hsh_idx.raw;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup SLC LR */
struct hw_db_slc_lr_idx slc_lr_idx = { .raw = 0 };
@@ -2405,6 +2442,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
struct hw_db_inline_hsh_data hsh_data;
+ setup_db_hsh_data(fd, &hsh_data);
if (attr->group > 0 && fd_has_empty_pattern(fd)) {
/*
@@ -2488,6 +2526,19 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup HSH */
+ struct hw_db_hsh_idx hsh_idx =
+ hw_db_inline_hsh_add(dev->ndev, dev->ndev->hw_db_handle,
+ &hsh_data);
+ fh->db_idxs[fh->db_idx_counter++] = hsh_idx.raw;
+ action_set_data.hsh = hsh_idx;
+
+ if (hsh_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference HSH resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
}
/* Setup CAT */
@@ -2667,6 +2718,122 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
return NULL;
}
+/*
+ * FPGA uses up to 10 32-bit words (320 bits) for hash calculation + 8 bits for L4 protocol number.
+ * Hashed data are split between two 128-bit Quad Words (QW)
+ * and two 32-bit Words (W), which can refer to different header parts.
+ */
+enum hsh_words_id {
+ HSH_WORDS_QW0 = 0,
+ HSH_WORDS_QW4,
+ HSH_WORDS_W8,
+ HSH_WORDS_W9,
+ HSH_WORDS_SIZE,
+};
+
+/* struct with details about hash QWs & Ws */
+struct hsh_words {
+ /*
+ * index of W (word) or index of 1st word of QW (quad word)
+ * is used for hash mask calculation
+ */
+ uint8_t index;
+ uint8_t toeplitz_index; /* offset in Bytes of given [Q]W inside Toeplitz RSS key */
+ enum hw_hsh_e pe; /* offset to header part, e.g. beginning of L4 */
+ enum hw_hsh_e ofs; /* relative offset in BYTES to 'pe' header offset above */
+ uint16_t bit_len; /* max length of header part in bits to fit into QW/W */
+ bool free; /* only free words can be used for hsh calculation */
+};
+
+static enum hsh_words_id get_free_word(struct hsh_words *words, uint16_t bit_len)
+{
+ enum hsh_words_id ret = HSH_WORDS_SIZE;
+ uint16_t ret_bit_len = UINT16_MAX;
+
+ for (enum hsh_words_id i = HSH_WORDS_QW0; i < HSH_WORDS_SIZE; i++) {
+ if (words[i].free && bit_len <= words[i].bit_len &&
+ words[i].bit_len < ret_bit_len) {
+ ret = i;
+ ret_bit_len = words[i].bit_len;
+ }
+ }
+
+ return ret;
+}
+
+static int flow_nic_set_hasher_part_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct hsh_words *words, uint32_t pe, uint32_t ofs,
+ int bit_len, bool toeplitz)
+{
+ int res = 0;
+
+ /* check if there is any free word, which can accommodate header part of given 'bit_len' */
+ enum hsh_words_id word = get_free_word(words, bit_len);
+
+ if (word == HSH_WORDS_SIZE) {
+ NT_LOG(ERR, FILTER, "Cannot add additional %d bits into hash", bit_len);
+ return -1;
+ }
+
+ words[word].free = false;
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].pe, hsh_idx, 0, pe);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].pe,
+ hsh_idx, pe);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, words[word].ofs, hsh_idx, 0, ofs);
+ NT_LOG(DBG, FILTER, "hw_mod_hsh_rcp_set(&ndev->be, %d, %d, 0, %d)", words[word].ofs,
+ hsh_idx, ofs);
+
+ /* set HW_HSH_RCP_WORD_MASK based on used QW/W and given 'bit_len' */
+ int mask_bit_len = bit_len;
+ uint32_t mask = 0x0;
+ uint32_t mask_be = 0x0;
+ uint32_t toeplitz_mask[9] = { 0x0 };
+ /* iterate through all words of QW */
+ uint16_t words_count = words[word].bit_len / 32;
+
+ for (uint16_t mask_off = 1; mask_off <= words_count; mask_off++) {
+ if (mask_bit_len >= 32) {
+ mask_bit_len -= 32;
+ mask = 0xffffffff;
+ mask_be = mask;
+
+ } else if (mask_bit_len > 0) {
+ /* keep bits from left to right, i.e. little to big endian */
+ mask_be = 0xffffffff >> (32 - mask_bit_len);
+ mask = mask_be << (32 - mask_bit_len);
+ mask_bit_len = 0;
+
+ } else {
+ mask = 0x0;
+ mask_be = 0x0;
+ }
+
+ /* reorder QW words mask from little to big endian */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, hsh_idx,
+ words[word].index + words_count - mask_off, mask);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_WORD_MASK, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, words[word].index + words_count - mask_off, mask);
+ toeplitz_mask[words[word].toeplitz_index + mask_off - 1] = mask_be;
+ }
+
+ if (toeplitz) {
+ NT_LOG(DBG, FILTER,
+ "Partial Toeplitz RSS key mask: %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32 " %08" PRIX32
+ " %08" PRIX32 "",
+ toeplitz_mask[8], toeplitz_mask[7], toeplitz_mask[6], toeplitz_mask[5],
+ toeplitz_mask[4], toeplitz_mask[3], toeplitz_mask[2], toeplitz_mask[1],
+ toeplitz_mask[0]);
+ NT_LOG(DBG, FILTER,
+ " MSB LSB");
+ }
+
+ return res;
+}
+
/*
* Public functions
*/
@@ -2717,6 +2884,12 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_PDB_RCP, 0);
+ /* Set default hasher recipe to 5-tuple */
+ flow_nic_set_hasher(ndev, 0, HASH_ALGO_5TUPLE);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+
+ flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -2783,6 +2956,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_pdb_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_PDB_RCP, 0);
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, 0, 0, 0);
+ hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
@@ -2980,6 +3157,672 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
+{
+ return (hash_mask & hash_bits) == hash_bits;
+}
+
+static __rte_always_inline void unset_bits(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ *hash_mask &= ~hash_bits;
+}
+
+static __rte_always_inline void unset_bits_and_log(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", *hash_mask & hash_bits) == 0)
+ NT_LOG(DBG, FILTER, "Configured RSS types:%s", rss_buffer);
+
+ unset_bits(hash_mask, hash_bits);
+}
+
+static __rte_always_inline void unset_bits_if_all_enabled(uint64_t *hash_mask, uint64_t hash_bits)
+{
+ if (all_bits_enabled(*hash_mask, hash_bits))
+ unset_bits(hash_mask, hash_bits);
+}
+
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf)
+{
+ uint64_t fields = rss_conf.rss_hf;
+
+ char rss_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(rss_buffer);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", fields) == 0)
+ NT_LOG(DBG, FILTER, "Requested RSS types:%s", rss_buffer);
+
+ /*
+ * Configure all (Q)Words usable for hash calculation.
+ * The hash can be calculated from 4 independent header parts:
+ *      |    QW0    |    QW4    | W8| W9|
+ * word | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
+ */
+ struct hsh_words words[HSH_WORDS_SIZE] = {
+ { 0, 5, HW_HSH_RCP_QW0_PE, HW_HSH_RCP_QW0_OFS, 128, true },
+ { 4, 1, HW_HSH_RCP_QW4_PE, HW_HSH_RCP_QW4_OFS, 128, true },
+ { 8, 0, HW_HSH_RCP_W8_PE, HW_HSH_RCP_W8_OFS, 32, true },
+ {
+ 9, 255, HW_HSH_RCP_W9_PE, HW_HSH_RCP_W9_OFS, 32,
+ true
+ }, /* not supported for Toeplitz */
+ };
+
+ int res = 0;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+ /* enable hashing */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_LOAD_DIST_TYPE, hsh_idx, 0, 2);
+
+ /* configure selected hash function and its key */
+ bool toeplitz = false;
+
+ switch (rss_conf.algorithm) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ /* Use default NTH10 hashing algorithm */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 0);
+ /* Use the first 32 bits of rss_key to configure the NTH10 SEED */
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_SEED, hsh_idx, 0,
+ rss_conf.rss_key[0] << 24 | rss_conf.rss_key[1] << 16 |
+ rss_conf.rss_key[2] << 8 | rss_conf.rss_key[3]);
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ toeplitz = true;
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TOEPLITZ, hsh_idx, 0, 1);
+ uint8_t empty_key = 0;
+
+ /* Toeplitz key (always 40B) must be encoded from little to big endian */
+ for (uint8_t i = 0; i <= (MAX_RSS_KEY_LEN - 8); i += 8) {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 |
+ rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 |
+ rss_conf.rss_key[i + 7]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4,
+ rss_conf.rss_key[i + 4] << 24 | rss_conf.rss_key[i + 5] << 16 |
+ rss_conf.rss_key[i + 6] << 8 | rss_conf.rss_key[i + 7]);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 |
+ rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 |
+ rss_conf.rss_key[i + 3]);
+ NT_LOG(DBG, FILTER,
+ "hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_K, %d, %d, 0x%" PRIX32
+ ")",
+ hsh_idx, i / 4 + 1,
+ rss_conf.rss_key[i] << 24 | rss_conf.rss_key[i + 1] << 16 |
+ rss_conf.rss_key[i + 2] << 8 | rss_conf.rss_key[i + 3]);
+ empty_key |= rss_conf.rss_key[i] | rss_conf.rss_key[i + 1] |
+ rss_conf.rss_key[i + 2] | rss_conf.rss_key[i + 3] |
+ rss_conf.rss_key[i + 4] | rss_conf.rss_key[i + 5] |
+ rss_conf.rss_key[i + 6] | rss_conf.rss_key[i + 7];
+ }
+
+ if (empty_key == 0) {
+ NT_LOG(ERR, FILTER,
+ "Toeplitz key must be configured. Key with all bytes set to zero is not allowed.");
+ return -1;
+ }
+
+ words[HSH_WORDS_W9].free = false;
+ NT_LOG(DBG, FILTER,
+ "Toeplitz hashing is enabled thus W9 and P_MASK cannot be used.");
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "Unknown hashing function %d requested", rss_conf.algorithm);
+ return -1;
+ }
+
+ /* indication that some IPv6 flag is present */
+ bool ipv6 = fields & (NT_ETH_RSS_IPV6_MASK);
+ /* store proto mask for later use at IP and L4 checksum handling */
+ uint64_t l4_proto_mask = fields &
+ (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+ RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX);
+
+ /* outermost headers are used by default, so innermost bit takes precedence if detected */
+ bool outer = (fields & RTE_ETH_RSS_LEVEL_INNERMOST) ? false : true;
+ unset_bits(&fields, RTE_ETH_RSS_LEVEL_MASK);
+
+ if (fields == 0) {
+ NT_LOG(ERR, FILTER, "RSS hash configuration 0x%" PRIX64 " is not valid.",
+ rss_conf.rss_hf);
+ return -1;
+ }
+
+ /* indication that IPv4 `protocol` or IPv6 `next header` fields shall be part of the hash
+ */
+ bool l4_proto_hash = false;
+
+ /*
+ * check if SRC_ONLY & DST_ONLY are used simultaneously;
+ * according to DPDK, we shall then behave as if none of these bits is set
+ */
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ unset_bits_if_all_enabled(&fields, RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+
+ /* L2 */
+ if (fields & (RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 6, 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L2, 0, 96, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L2_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner src MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 6,
+ 48, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L2_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 48, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner src & dst MAC hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L2, 0,
+ 96, toeplitz);
+ }
+
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_ETH | RTE_ETH_RSS_L2_SRC_ONLY |
+ RTE_ETH_RSS_L2_DST_ONLY);
+ }
+
+ /*
+ * VLAN support of multiple VLAN headers,
+ * where S-VLAN is the first and C-VLAN the last VLAN header
+ */
+ if (fields & RTE_ETH_RSS_C_VLAN) {
+ /*
+ * use the MPLS protocol offset, which points just after the ethertype, with
+ * relative offset -6 (i.e. 2 bytes of ethertype/size + 4 bytes of VLAN
+ * header field) to access the last VLAN header
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer C-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1a tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, -6,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner C-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1a tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ -6, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_C_VLAN);
+ }
+
+ if (fields & RTE_ETH_RSS_S_VLAN) {
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer S-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1a tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_FIRST_VLAN, 0, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner S-VLAN hasher.");
+ /*
+ * use whole 32-bit 802.1a tag - backward compatible
+ * with VSWITCH implementation
+ */
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_VLAN,
+ 0, 32, toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_S_VLAN);
+ }
+ /* L2 payload */
+ /* calculate hash of 128 bits of L2 payload; use the MPLS protocol offset to
+ * address the beginning of the L2 payload even if no MPLS header is present
+ */
+ if (fields & RTE_ETH_RSS_L2_PAYLOAD) {
+ uint64_t outer_fields_enabled = 0;
+
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_MPLS, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L2 payload hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_MPLS,
+ 0, 128, toeplitz);
+ outer_fields_enabled = fields & RTE_ETH_RSS_GTPU;
+ }
+
+ /*
+ * L2 PAYLOAD hashing overrides all L3 & L4 RSS flags.
+ * Thus we can clear all remaining (supported)
+ * RSS flags...
+ */
+ unset_bits_and_log(&fields, NT_ETH_RSS_OFFLOAD_MASK);
+ /*
+ * ...but in case of INNER L2 PAYLOAD we must process
+ * "always outer" GTPU field if enabled
+ */
+ fields |= outer_fields_enabled;
+ }
+
+ /* L3 + L4 protocol number */
+ if (fields & RTE_ETH_RSS_IPV4_CHKSUM) {
+ /* only IPv4 checksum is supported by DPDK RTE_ETH_RSS_* types */
+ if (ipv6) {
+ NT_LOG(ERR, FILTER,
+ "RSS: IPv4 checksum requested with IPv6 header hashing!");
+ res = 1;
+
+ } else if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L3, 10,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L3,
+ 10, 16, toeplitz);
+ }
+
+ /*
+ * L3 checksum is made from whole L3 header, i.e. no need to process other
+ * L3 hashing flags
+ */
+ unset_bits_and_log(&fields, RTE_ETH_RSS_IPV4_CHKSUM | NT_ETH_RSS_IP_MASK);
+ }
+
+ if (fields & NT_ETH_RSS_IP_MASK) {
+ if (ipv6) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST,
+ -16, 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv6/IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, -16,
+ 128, toeplitz);
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_FINAL_IP_DST, 0,
+ 128, toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & (RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6)) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv6/IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 32, toeplitz);
+ }
+ }
+
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_AUTO_IPV4_MASK, hsh_idx, 0,
+ 1);
+
+ } else {
+ /* IPv4 */
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 32, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 16,
+ 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 12,
+ 64, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L3_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 32,
+ toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L3_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 dst only hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 16, 32,
+ toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner IPv4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L3, 12, 64,
+ toeplitz);
+ }
+
+ /* check if fragment ID shall be part of hash */
+ if (fields & RTE_ETH_RSS_FRAG_IPV4) {
+ if (outer) {
+ NT_LOG(DBG, FILTER,
+ "Set outer IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_ID_IPV4_6, 0,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER,
+ "Set inner IPv4 fragment ID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words,
+ DYN_TUN_ID_IPV4_6,
+ 0, 16, toeplitz);
+ }
+ }
+ }
+
+ /* check if L4 protocol type shall be part of hash */
+ if (l4_proto_mask)
+ l4_proto_hash = true;
+
+ unset_bits_and_log(&fields, NT_ETH_RSS_IP_MASK);
+ }
+
+ /* L4 */
+ if (fields & (RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+ if (outer) {
+ if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set outer L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 2, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set outer L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 0, 32, toeplitz);
+ }
+
+ } else if (fields & RTE_ETH_RSS_L4_SRC_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 src hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 16, toeplitz);
+
+ } else if (fields & RTE_ETH_RSS_L4_DST_ONLY) {
+ NT_LOG(DBG, FILTER, "Set inner L4 dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 2,
+ 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 src & dst hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_TUN_L4, 0,
+ 32, toeplitz);
+ }
+
+ l4_proto_hash = true;
+ unset_bits_and_log(&fields,
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY |
+ RTE_ETH_RSS_L4_DST_ONLY);
+ }
+
+ /* IPv4 protocol / IPv6 next header fields */
+ if (l4_proto_hash) {
+ /* NOTE: HW_HSH_RCP_P_MASK is not supported for Toeplitz, thus one of QW0, QW4
+ * or W8 must be used to hash on the `protocol` field of IPv4 or the
+ * `next header` field of the IPv6 header.
+ */
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 6, 8,
+ toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_L3, 9, 8,
+ toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 0);
+ }
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner L4 protocol type / next header hasher.");
+
+ if (toeplitz) {
+ if (ipv6) {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 6, 8, toeplitz);
+
+ } else {
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx,
+ words, DYN_TUN_L3,
+ 9, 8, toeplitz);
+ }
+
+ } else {
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_P_MASK, hsh_idx, 0,
+ 1);
+ res |= hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_TNL_P, hsh_idx, 0,
+ 1);
+ }
+ }
+
+ l4_proto_hash = false;
+ }
+
+ /*
+ * GTPU - for UPF use cases we always use TEID from outermost GTPU header
+ * even if other headers are innermost
+ */
+ if (fields & RTE_ETH_RSS_GTPU) {
+ NT_LOG(DBG, FILTER, "Set outer GTPU TEID hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words, DYN_L4_PAYLOAD, 4, 32,
+ toeplitz);
+ unset_bits_and_log(&fields, RTE_ETH_RSS_GTPU);
+ }
+
+ /* Checksums */
+ /* only UDP, TCP and SCTP checksums are supported */
+ if (fields & RTE_ETH_RSS_L4_CHKSUM) {
+ switch (l4_proto_mask) {
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX:
+ case RTE_ETH_RSS_UDP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 6, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner UDP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 6, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX:
+ case RTE_ETH_RSS_TCP_COMBINED:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 16, 16, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner TCP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 16, 16,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ case RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
+ if (outer) {
+ NT_LOG(DBG, FILTER, "Set outer SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_L4, 8, 32, toeplitz);
+
+ } else {
+ NT_LOG(DBG, FILTER, "Set inner SCTP checksum hasher.");
+ res |= flow_nic_set_hasher_part_inline(ndev, hsh_idx, words,
+ DYN_TUN_L4, 8, 32,
+ toeplitz);
+ }
+
+ unset_bits_and_log(&fields, RTE_ETH_RSS_L4_CHKSUM | l4_proto_mask);
+ break;
+
+ case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+ case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+
+ /* none or unsupported protocol was chosen */
+ case 0:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing is supported only for UDP, TCP and SCTP protocols");
+ res = -1;
+ break;
+
+ /* multiple L4 protocols were selected */
+ default:
+ NT_LOG(ERR, FILTER,
+ "L4 checksum hashing can be enabled just for one of UDP, TCP or SCTP protocols");
+ res = -1;
+ break;
+ }
+ }
+
+ if (fields || res != 0) {
+ hw_mod_hsh_rcp_set(&ndev->be, HW_HSH_RCP_PRESET_ALL, hsh_idx, 0, 0);
+
+ if (sprint_nt_rss_mask(rss_buffer, rss_buffer_len, " ", rss_conf.rss_hf) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration%s is not supported for hash func %s.",
+ rss_buffer,
+ toeplitz ? "Toeplitz" : "NTH10");
+
+ } else {
+ NT_LOG(ERR, FILTER,
+ "RSS configuration 0x%" PRIX64
+ " is not supported for hash func %s.",
+ rss_conf.rss_hf,
+ toeplitz ? "Toeplitz" : "NTH10");
+ }
+
+ return -1;
+ }
+
+ return res;
+}
+
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -2993,6 +3836,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b87f8542ac..e623bb2352 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,4 +38,8 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 149c549112..1069be2f85 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -252,6 +252,10 @@ struct profile_inline_ops {
int (*flow_destroy_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
+ int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 32/80] net/ntnic: add TPE module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (30 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 31/80] net/ntnic: add hash API Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 33/80] net/ntnic: add FLM module Serhii Iliushyk
` (47 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TX Packet Editor is a software abstraction module
that keeps track of the handful of FPGA modules
used to edit packets in the TX pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 16 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 757 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 373 +++++++++
.../profile_inline/flow_api_hw_db_inline.h | 70 ++
.../profile_inline/flow_api_profile_inline.c | 127 ++-
5 files changed, 1342 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index cee148807a..e16dcd478f 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -889,24 +889,40 @@ void hw_mod_tpe_free(struct flow_api_backend_s *be);
int hw_mod_tpe_reset(struct flow_api_backend_s *be);
int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value);
int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
enum debug_mode_e {
FLOW_BACKEND_DEBUG_MODE_NONE = 0x0000,
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
index 0d73b795d5..ba8f2d0dbb 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
@@ -169,6 +169,82 @@ int hw_mod_tpe_rpp_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpp_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpp_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpp_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpp_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpp_rcp, struct tpe_v1_rpp_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPP_RCP_EXP:
+ GET_SET(be->tpe.v3.rpp_rcp[index].exp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpp_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* IFR_RCP
*/
@@ -203,6 +279,90 @@ int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_ins_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_ins_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.ins_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_ins_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.ins_rcp, struct tpe_v1_ins_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_INS_RCP_DYN:
+ GET_SET(be->tpe.v3.ins_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_INS_RCP_OFS:
+ GET_SET(be->tpe.v3.ins_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_INS_RCP_LEN:
+ GET_SET(be->tpe.v3.ins_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_ins_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RCP
*/
@@ -220,6 +380,102 @@ int hw_mod_tpe_rpl_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v3_rpl_v4_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rcp, struct tpe_v3_rpl_v4_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RCP_DYN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_RPL_RCP_OFS:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_RPL_RCP_LEN:
+ GET_SET(be->tpe.v3.rpl_rcp[index].len, value);
+ break;
+
+ case HW_TPE_RPL_RCP_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_RCP_EXT_PRIO:
+ GET_SET(be->tpe.v3.rpl_rcp[index].ext_prio, value);
+ break;
+
+ case HW_TPE_RPL_RCP_ETH_TYPE_WR:
+ GET_SET(be->tpe.v3.rpl_rcp[index].eth_type_wr, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_EXT
*/
@@ -237,6 +493,86 @@ int hw_mod_tpe_rpl_ext_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_ext_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_ext_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_ext[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_ext_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value, be->tpe.nb_rpl_ext_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_ext_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_ext, struct tpe_v1_rpl_v2_ext_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_EXT_RPL_PTR:
+ GET_SET(be->tpe.v3.rpl_ext[index].rpl_ptr, value);
+ break;
+
+ case HW_TPE_RPL_EXT_META_RPL_LEN:
+ GET_SET(be->tpe.v3.rpl_ext[index].meta_rpl_len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_ext_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpl_ext_mod(be, field, index, &value, 0);
+}
+
/*
* RPL_RPL
*/
@@ -254,6 +590,89 @@ int hw_mod_tpe_rpl_rpl_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_rpl_rpl_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpl_rpl_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.rpl_rpl[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_rpl_v2_rpl_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value, be->tpe.nb_rpl_depth);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rpl_depth) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.rpl_rpl, struct tpe_v1_rpl_v2_rpl_s, index,
+ *value);
+ break;
+
+ case HW_TPE_RPL_RPL_VALUE:
+ if (get)
+ memcpy(value, be->tpe.v3.rpl_rpl[index].value,
+ sizeof(uint32_t) * 4);
+
+ else
+ memcpy(be->tpe.v3.rpl_rpl[index].value, value,
+ sizeof(uint32_t) * 4);
+
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpl_rpl_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_tpe_rpl_rpl_mod(be, field, index, value, 0);
+}
+
/*
* CPY_RCP
*/
@@ -273,6 +692,96 @@ int hw_mod_tpe_cpy_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_cpy_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_cpy_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ const uint32_t cpy_size = be->tpe.nb_cpy_writers * be->tpe.nb_rcp_categories;
+
+ if (index >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.cpy_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_cpy_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value, cpy_size);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= cpy_size) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.cpy_rcp, struct tpe_v1_cpy_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CPY_RCP_READER_SELECT:
+ GET_SET(be->tpe.v3.cpy_rcp[index].reader_select, value);
+ break;
+
+ case HW_TPE_CPY_RCP_DYN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].dyn, value);
+ break;
+
+ case HW_TPE_CPY_RCP_OFS:
+ GET_SET(be->tpe.v3.cpy_rcp[index].ofs, value);
+ break;
+
+ case HW_TPE_CPY_RCP_LEN:
+ GET_SET(be->tpe.v3.cpy_rcp[index].len, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_cpy_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_cpy_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* HFU_RCP
*/
@@ -290,6 +799,166 @@ int hw_mod_tpe_hfu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_hfu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_hfu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.hfu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_hfu_v1_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.hfu_rcp, struct tpe_v1_hfu_v1_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_outer_l4_len, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_A_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_a_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_B_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_b_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_pos_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_ADD_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_add_ofs, value);
+ break;
+
+ case HW_TPE_HFU_RCP_LEN_C_SUB_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].len_c_sub_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_WR:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_wr, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_DYN:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_dyn, value);
+ break;
+
+ case HW_TPE_HFU_RCP_TTL_POS_OFS:
+ GET_SET(be->tpe.v3.hfu_rcp[index].ttl_pos_ofs, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_hfu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_hfu_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* CSU_RCP
*/
@@ -306,3 +975,91 @@ int hw_mod_tpe_csu_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_csu_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+
+static int hw_mod_tpe_csu_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->tpe.v3.csu_rcp[index], (uint8_t)*value,
+ sizeof(struct tpe_v1_csu_v0_rcp_s));
+ break;
+
+ case HW_TPE_FIND:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ FIND_EQUAL_INDEX(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value, be->tpe.nb_rcp_categories);
+ break;
+
+ case HW_TPE_COMPARE:
+ if (!get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ if (*value >= be->tpe.nb_rcp_categories) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ DO_COMPARE_INDEXS(be->tpe.v3.csu_rcp, struct tpe_v1_csu_v0_rcp_s, index,
+ *value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_OUTER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].ol4_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L3_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il3_cmd, value);
+ break;
+
+ case HW_TPE_CSU_RCP_INNER_L4_CMD:
+ GET_SET(be->tpe.v3.csu_rcp[index].il4_cmd, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_csu_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_csu_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 068c890b45..dec96fce85 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -30,6 +30,17 @@ struct hw_db_inline_resource_db {
int ref;
} *slc_lr;
+ struct hw_db_inline_resource_db_tpe {
+ struct hw_db_inline_tpe_data data;
+ int ref;
+ } *tpe;
+
+ struct hw_db_inline_resource_db_tpe_ext {
+ struct hw_db_inline_tpe_ext_data data;
+ int replace_ram_idx;
+ int ref;
+ } *tpe_ext;
+
struct hw_db_inline_resource_db_hsh {
struct hw_db_inline_hsh_data data;
int ref;
@@ -38,6 +49,8 @@ struct hw_db_inline_resource_db {
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
+ uint32_t nb_tpe;
+ uint32_t nb_tpe_ext;
uint32_t nb_hsh;
/* Items */
@@ -101,6 +114,22 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_tpe = ndev->be.tpe.nb_rcp_categories;
+ db->tpe = calloc(db->nb_tpe, sizeof(struct hw_db_inline_resource_db_tpe));
+
+ if (db->tpe == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->nb_tpe_ext = ndev->be.tpe.nb_rpl_ext_categories;
+ db->tpe_ext = calloc(db->nb_tpe_ext, sizeof(struct hw_db_inline_resource_db_tpe_ext));
+
+ if (db->tpe_ext == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
db->nb_cat = ndev->be.cat.nb_cat_funcs;
db->cat = calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_cat));
@@ -154,6 +183,8 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cot);
free(db->qsl);
free(db->slc_lr);
+ free(db->tpe);
+ free(db->tpe_ext);
free(db->hsh);
free(db->cat);
@@ -195,6 +226,15 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_slc_lr_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_TPE:
+ hw_db_inline_tpe_deref(ndev, db_handle, *(struct hw_db_tpe_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ hw_db_inline_tpe_ext_deref(ndev, db_handle,
+ *(struct hw_db_tpe_ext_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -240,6 +280,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_SLC_LR:
return &db->slc_lr[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_TPE:
+ return &db->tpe[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_TPE_EXT:
+ return &db->tpe_ext[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -652,6 +698,333 @@ void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
}
}
+/******************************************************************************/
+/* TPE */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_compare(const struct hw_db_inline_tpe_data *data1,
+ const struct hw_db_inline_tpe_data *data2)
+{
+ for (int i = 0; i < 6; ++i)
+ if (data1->writer[i].en != data2->writer[i].en ||
+ data1->writer[i].reader_select != data2->writer[i].reader_select ||
+ data1->writer[i].dyn != data2->writer[i].dyn ||
+ data1->writer[i].ofs != data2->writer[i].ofs ||
+ data1->writer[i].len != data2->writer[i].len)
+ return 0;
+
+ return data1->insert_len == data2->insert_len && data1->new_outer == data2->new_outer &&
+ data1->calc_eth_type_from_inner_ip == data2->calc_eth_type_from_inner_ip &&
+ data1->ttl_en == data2->ttl_en && data1->ttl_dyn == data2->ttl_dyn &&
+ data1->ttl_ofs == data2->ttl_ofs && data1->len_a_en == data2->len_a_en &&
+ data1->len_a_pos_dyn == data2->len_a_pos_dyn &&
+ data1->len_a_pos_ofs == data2->len_a_pos_ofs &&
+ data1->len_a_add_dyn == data2->len_a_add_dyn &&
+ data1->len_a_add_ofs == data2->len_a_add_ofs &&
+ data1->len_a_sub_dyn == data2->len_a_sub_dyn &&
+ data1->len_b_en == data2->len_b_en &&
+ data1->len_b_pos_dyn == data2->len_b_pos_dyn &&
+ data1->len_b_pos_ofs == data2->len_b_pos_ofs &&
+ data1->len_b_add_dyn == data2->len_b_add_dyn &&
+ data1->len_b_add_ofs == data2->len_b_add_ofs &&
+ data1->len_b_sub_dyn == data2->len_b_sub_dyn &&
+ data1->len_c_en == data2->len_c_en &&
+ data1->len_c_pos_dyn == data2->len_c_pos_dyn &&
+ data1->len_c_pos_ofs == data2->len_c_pos_ofs &&
+ data1->len_c_add_dyn == data2->len_c_add_dyn &&
+ data1->len_c_add_ofs == data2->len_c_add_ofs &&
+ data1->len_c_sub_dyn == data2->len_c_sub_dyn;
+}
+
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE;
+
+ for (uint32_t i = 1; i < db->nb_tpe; ++i) {
+ int ref = db->tpe[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_compare(data, &db->tpe[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe[idx.ids].ref = 1;
+ memcpy(&db->tpe[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_data));
+
+ if (data->insert_len > 0) {
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_RPP_RCP_EXP, idx.ids, data->insert_len);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_INS_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_DYN, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_OFS, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_LEN, idx.ids, data->insert_len);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_RPL_PTR, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_EXT_PRIO, idx.ids, 1);
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_RPL_RCP_ETH_TYPE_WR, idx.ids,
+ data->calc_eth_type_from_inner_ip);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+ }
+
+ for (uint32_t i = 0; i < 6; ++i) {
+ if (data->writer[i].en) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i,
+ data->writer[i].reader_select);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, data->writer[i].dyn);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, data->writer[i].ofs);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, data->writer[i].len);
+
+ } else {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_READER_SELECT,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_DYN,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_OFS,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_CPY_RCP_LEN,
+ idx.ids + db->nb_tpe * i, 0);
+ }
+
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_WR, idx.ids, data->len_a_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_OUTER_L4_LEN, idx.ids,
+ data->new_outer);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_DYN, idx.ids,
+ data->len_a_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_POS_OFS, idx.ids,
+ data->len_a_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_DYN, idx.ids,
+ data->len_a_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_ADD_OFS, idx.ids,
+ data->len_a_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_A_SUB_DYN, idx.ids,
+ data->len_a_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_WR, idx.ids, data->len_b_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_DYN, idx.ids,
+ data->len_b_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_POS_OFS, idx.ids,
+ data->len_b_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_DYN, idx.ids,
+ data->len_b_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_ADD_OFS, idx.ids,
+ data->len_b_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_B_SUB_DYN, idx.ids,
+ data->len_b_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_WR, idx.ids, data->len_c_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_DYN, idx.ids,
+ data->len_c_pos_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_POS_OFS, idx.ids,
+ data->len_c_pos_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_DYN, idx.ids,
+ data->len_c_add_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_ADD_OFS, idx.ids,
+ data->len_c_add_ofs);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_LEN_C_SUB_DYN, idx.ids,
+ data->len_c_sub_dyn);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_WR, idx.ids, data->ttl_en);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_DYN, idx.ids, data->ttl_dyn);
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_HFU_RCP_TTL_POS_OFS, idx.ids, data->ttl_ofs);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_OUTER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L3_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_CSU_RCP_INNER_L4_CMD, idx.ids, 3);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe[idx.ids].ref -= 1;
+
+ if (db->tpe[idx.ids].ref <= 0) {
+ for (uint32_t i = 0; i < 6; ++i) {
+ hw_mod_tpe_cpy_rcp_set(&ndev->be, HW_TPE_PRESET_ALL,
+ idx.ids + db->nb_tpe * i, 0);
+ hw_mod_tpe_cpy_rcp_flush(&ndev->be, idx.ids + db->nb_tpe * i, 1);
+ }
+
+ hw_mod_tpe_rpp_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpp_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_ins_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_ins_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_rpl_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_hfu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_hfu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ hw_mod_tpe_csu_rcp_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_csu_rcp_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->tpe[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_data));
+ db->tpe[idx.ids].ref = 0;
+ }
+}
+
+/******************************************************************************/
+/* TPE_EXT */
+/******************************************************************************/
+
+static int hw_db_inline_tpe_ext_compare(const struct hw_db_inline_tpe_ext_data *data1,
+ const struct hw_db_inline_tpe_ext_data *data2)
+{
+ return data1->size == data2->size &&
+ memcmp(data1->hdr8, data2->hdr8, HW_DB_INLINE_MAX_ENCAP_SIZE) == 0;
+}
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_tpe_ext_idx idx = { .raw = 0 };
+ int rpl_rpl_length = ((int)data->size + 15) / 16;
+ int found = 0, rpl_rpl_index = 0;
+
+ idx.type = HW_DB_IDX_TYPE_TPE_EXT;
+
+ if (data->size > HW_DB_INLINE_MAX_ENCAP_SIZE) {
+ idx.error = 1;
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_tpe_ext; ++i) {
+ int ref = db->tpe_ext[i].ref;
+
+ if (ref > 0 && hw_db_inline_tpe_ext_compare(data, &db->tpe_ext[i].data)) {
+ idx.ids = i;
+ hw_db_inline_tpe_ext_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ rpl_rpl_index = flow_nic_alloc_resource_config(ndev, RES_TPE_RPL, rpl_rpl_length, 1);
+
+ if (rpl_rpl_index < 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->tpe_ext[idx.ids].ref = 1;
+ db->tpe_ext[idx.ids].replace_ram_idx = rpl_rpl_index;
+ memcpy(&db->tpe_ext[idx.ids].data, data, sizeof(struct hw_db_inline_tpe_ext_data));
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_RPL_PTR, idx.ids, rpl_rpl_index);
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_RPL_EXT_META_RPL_LEN, idx.ids, data->size);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_data[4];
+ memcpy(rpl_data, data->hdr32 + i * 4, sizeof(rpl_data));
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_data);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ return idx;
+}
+
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->tpe_ext[idx.ids].ref += 1;
+}
+
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->tpe_ext[idx.ids].ref -= 1;
+
+ if (db->tpe_ext[idx.ids].ref <= 0) {
+ const int rpl_rpl_length = ((int)db->tpe_ext[idx.ids].data.size + 15) / 16;
+ const int rpl_rpl_index = db->tpe_ext[idx.ids].replace_ram_idx;
+
+ hw_mod_tpe_rpl_ext_set(&ndev->be, HW_TPE_PRESET_ALL, idx.ids, 0);
+ hw_mod_tpe_rpl_ext_flush(&ndev->be, idx.ids, 1);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ uint32_t rpl_zero[] = { 0, 0, 0, 0 };
+ hw_mod_tpe_rpl_rpl_set(&ndev->be, HW_TPE_RPL_RPL_VALUE, rpl_rpl_index + i,
+ rpl_zero);
+ flow_nic_free_resource(ndev, RES_TPE_RPL, rpl_rpl_index + i);
+ }
+
+ hw_mod_tpe_rpl_rpl_flush(&ndev->be, rpl_rpl_index, rpl_rpl_length);
+
+ memset(&db->tpe_ext[idx.ids].data, 0x0, sizeof(struct hw_db_inline_tpe_ext_data));
+ db->tpe_ext[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* CAT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index c97bdef1b7..18d959307e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -52,6 +52,60 @@ struct hw_db_slc_lr_idx {
HW_DB_IDX;
};
+struct hw_db_inline_tpe_data {
+ uint32_t insert_len : 16;
+ uint32_t new_outer : 1;
+ uint32_t calc_eth_type_from_inner_ip : 1;
+ uint32_t ttl_en : 1;
+ uint32_t ttl_dyn : 5;
+ uint32_t ttl_ofs : 8;
+
+ struct {
+ uint32_t en : 1;
+ uint32_t reader_select : 3;
+ uint32_t dyn : 5;
+ uint32_t ofs : 14;
+ uint32_t len : 5;
+ uint32_t padding : 4;
+ } writer[6];
+
+ uint32_t len_a_en : 1;
+ uint32_t len_a_pos_dyn : 5;
+ uint32_t len_a_pos_ofs : 8;
+ uint32_t len_a_add_dyn : 5;
+ uint32_t len_a_add_ofs : 8;
+ uint32_t len_a_sub_dyn : 5;
+
+ uint32_t len_b_en : 1;
+ uint32_t len_b_pos_dyn : 5;
+ uint32_t len_b_pos_ofs : 8;
+ uint32_t len_b_add_dyn : 5;
+ uint32_t len_b_add_ofs : 8;
+ uint32_t len_b_sub_dyn : 5;
+
+ uint32_t len_c_en : 1;
+ uint32_t len_c_pos_dyn : 5;
+ uint32_t len_c_pos_ofs : 8;
+ uint32_t len_c_add_dyn : 5;
+ uint32_t len_c_add_ofs : 8;
+ uint32_t len_c_sub_dyn : 5;
+};
+
+struct hw_db_inline_tpe_ext_data {
+ uint32_t size;
+ union {
+ uint8_t hdr8[HW_DB_INLINE_MAX_ENCAP_SIZE];
+ uint32_t hdr32[(HW_DB_INLINE_MAX_ENCAP_SIZE + 3) / 4];
+ };
+};
+
+struct hw_db_tpe_idx {
+ HW_DB_IDX;
+};
+struct hw_db_tpe_ext_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -70,6 +124,9 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
HW_DB_IDX_TYPE_SLC_LR,
+ HW_DB_IDX_TYPE_TPE,
+ HW_DB_IDX_TYPE_TPE_EXT,
+
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
@@ -138,6 +195,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
};
@@ -181,6 +239,18 @@ void hw_db_inline_slc_lr_ref(struct flow_nic_dev *ndev, void *db_handle,
void hw_db_inline_slc_lr_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_slc_lr_idx idx);
+struct hw_db_tpe_idx hw_db_inline_tpe_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_data *data);
+void hw_db_inline_tpe_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+void hw_db_inline_tpe_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_tpe_idx idx);
+
+struct hw_db_tpe_ext_idx hw_db_inline_tpe_ext_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_tpe_ext_data *data);
+void hw_db_inline_tpe_ext_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+void hw_db_inline_tpe_ext_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_tpe_ext_idx idx);
+
struct hw_db_hsh_idx hw_db_inline_hsh_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_hsh_data *data);
void hw_db_inline_hsh_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_hsh_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 8ba100edd7..07801b42ff 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -18,6 +18,8 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
@@ -2419,6 +2421,92 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
}
}
+ /* Setup TPE EXT */
+ if (fd->tun_hdr.len > 0) {
+ assert(fd->tun_hdr.len <= HW_DB_INLINE_MAX_ENCAP_SIZE);
+
+ struct hw_db_inline_tpe_ext_data tpe_ext_data = {
+ .size = fd->tun_hdr.len,
+ };
+
+ memset(tpe_ext_data.hdr8, 0x0, HW_DB_INLINE_MAX_ENCAP_SIZE);
+ memcpy(tpe_ext_data.hdr8, fd->tun_hdr.d.hdr8, (fd->tun_hdr.len + 15) & ~15);
+
+ struct hw_db_tpe_ext_idx tpe_ext_idx =
+ hw_db_inline_tpe_ext_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_ext_data);
+ local_idxs[(*local_idx_counter)++] = tpe_ext_idx.raw;
+
+ if (tpe_ext_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE EXT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_rpl_ext_ptr)
+ *flm_rpl_ext_ptr = tpe_ext_idx.ids;
+ }
+
+ /* Setup TPE */
+ assert(fd->modify_field_count <= 6);
+
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip =
+ !fd->tun_hdr.new_outer && fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ tpe_data.writer[i].en = 1;
+ tpe_data.writer[i].reader_select = fd->modify_field[i].select;
+ tpe_data.writer[i].dyn = fd->modify_field[i].dyn;
+ tpe_data.writer[i].ofs = fd->modify_field[i].ofs;
+ tpe_data.writer[i].len = fd->modify_field[i].len;
+ }
+
+ if (fd->tun_hdr.new_outer) {
+ const int fcs_length = 4;
+
+ /* L4 length */
+ tpe_data.len_a_en = 1;
+ tpe_data.len_a_pos_dyn = DYN_L4;
+ tpe_data.len_a_pos_ofs = 4;
+ tpe_data.len_a_add_dyn = 18;
+ tpe_data.len_a_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_a_sub_dyn = DYN_L4;
+
+ /* L3 length */
+ tpe_data.len_b_en = 1;
+ tpe_data.len_b_pos_dyn = DYN_L3;
+ tpe_data.len_b_pos_ofs = fd->tun_hdr.ip_version == 4 ? 2 : 4;
+ tpe_data.len_b_add_dyn = 18;
+ tpe_data.len_b_add_ofs = (uint32_t)(-fcs_length) & 0xff;
+ tpe_data.len_b_sub_dyn = DYN_L3;
+
+ /* GTP length */
+ tpe_data.len_c_en = 1;
+ tpe_data.len_c_pos_dyn = DYN_L4_PAYLOAD;
+ tpe_data.len_c_pos_ofs = 2;
+ tpe_data.len_c_add_dyn = 18;
+ tpe_data.len_c_add_ofs = (uint32_t)(-8 - fcs_length) & 0xff;
+ tpe_data.len_c_sub_dyn = DYN_L4_PAYLOAD;
+ }
+
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle, &tpe_data);
+
+ local_idxs[(*local_idx_counter)++] = tpe_idx.raw;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
return 0;
}
@@ -2539,6 +2627,30 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
goto error_out;
}
+
+ /* Setup TPE */
+ if (fd->ttl_sub_enable) {
+ struct hw_db_inline_tpe_data tpe_data = {
+ .insert_len = fd->tun_hdr.len,
+ .new_outer = fd->tun_hdr.new_outer,
+ .calc_eth_type_from_inner_ip = !fd->tun_hdr.new_outer &&
+ fd->header_strip_end_dyn == DYN_TUN_L3,
+ .ttl_en = fd->ttl_sub_enable,
+ .ttl_dyn = fd->ttl_sub_outer ? DYN_L3 : DYN_TUN_L3,
+ .ttl_ofs = fd->ttl_sub_ipv4 ? 8 : 7,
+ };
+ struct hw_db_tpe_idx tpe_idx =
+ hw_db_inline_tpe_add(dev->ndev, dev->ndev->hw_db_handle,
+ &tpe_data);
+ fh->db_idxs[fh->db_idx_counter++] = tpe_idx.raw;
+ action_set_data.tpe = tpe_idx;
+
+ if (tpe_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference TPE resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+ }
}
/* Setup CAT */
@@ -2843,6 +2955,16 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (!ndev->flow_mgnt_prepared) {
/* Check static arrays are big enough */
assert(ndev->be.tpe.nb_cpy_writers <= MAX_CPY_WRITERS_SUPPORTED);
+ /* KM Flow Type 0 is reserved */
+ flow_nic_mark_resource_used(ndev, RES_KM_FLOW_TYPE, 0);
+ flow_nic_mark_resource_used(ndev, RES_KM_CATEGORY, 0);
+
+ /* Reserved FLM Flow Types */
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_MISS_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE, NT_FLM_UNHANDLED_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_FLOW_TYPE,
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE);
+ flow_nic_mark_resource_used(ndev, RES_FLM_RCP, 0);
/* COT is locked to CFN. Don't set color for CFN 0 */
hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, 0, 0);
@@ -2868,8 +2990,11 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_QSL_QST, 0);
- /* SLC LR index 0 is reserved */
+ /* SLC LR & TPE index 0 are reserved */
flow_nic_mark_resource_used(ndev, RES_SLC_LR_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RCP, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_EXT, 0);
+ flow_nic_mark_resource_used(ndev, RES_TPE_RPL, 0);
/* PDB setup Direct Virtio Scatter-Gather descriptor of 12 bytes for its recipe 0
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 33/80] net/ntnic: add FLM module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (31 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 32/80] net/ntnic: add TPE module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 34/80] net/ntnic: add FLM RCP module Serhii Iliushyk
` (46 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher (FLM) module is a high-performance stateful SDRAM lookup
and programming engine which supports exact match lookup
at line rate of up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/hw_mod_backend.h | 42 +++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c | 190 +++++++++++++
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 257 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 234 ++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 33 +++
.../profile_inline/flow_api_profile_inline.c | 224 ++++++++++++++-
.../flow_api_profile_inline_config.h | 58 ++++
drivers/net/ntnic/ntutil/nt_util.h | 8 +
8 files changed, 1042 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index e16dcd478f..de662c4ed1 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -367,6 +367,18 @@ int hw_mod_cat_cfn_flush(struct flow_api_backend_s *be, int start_idx, int count
int hw_mod_cat_cfn_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index, int word_off,
uint32_t value);
/* KCE/KCS/FTE KM */
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -374,6 +386,18 @@ int hw_mod_cat_fte_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_fte_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
enum km_flm_if_select_e if_num, int index, uint32_t *value);
/* KCE/KCS/FTE FLM */
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count);
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value);
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value);
int hw_mod_cat_fte_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
int start_idx, int count);
int hw_mod_cat_fte_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
@@ -384,10 +408,14 @@ int hw_mod_cat_fte_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
int hw_mod_cat_cte_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
uint32_t value);
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value);
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_cat_cot_set(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
@@ -638,7 +666,21 @@ int hw_mod_flm_reset(struct flow_api_backend_s *be);
int hw_mod_flm_control_flush(struct flow_api_backend_s *be);
int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+int hw_mod_flm_status_update(struct flow_api_backend_s *be);
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be);
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value);
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
index 9164ec1ae0..985c821312 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_cat.c
@@ -902,6 +902,95 @@ static int hw_mod_cat_kce_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kce_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kce_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kce_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kce_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kce_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= (be->cat.nb_cat_funcs / 8)) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v18.kce[index].enable_bm, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCE_ENABLE_BM:
+ GET_SET(be->cat.v21.kce[index].enable_bm[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kce_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kce_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kce_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kce_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kce_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* KCS
*/
@@ -925,6 +1014,95 @@ static int hw_mod_cat_kcs_flush(struct flow_api_backend_s *be, enum km_flm_if_se
return be->iface->cat_kcs_flush(be->be_dev, &be->cat, km_if_idx, start_idx, count);
}
+int hw_mod_cat_kcs_km_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 0, start_idx, count);
+}
+
+int hw_mod_cat_kcs_flm_flush(struct flow_api_backend_s *be, enum km_flm_if_select_e if_num,
+ int start_idx, int count)
+{
+ return hw_mod_cat_kcs_flush(be, if_num, 1, start_idx, count);
+}
+
+static int hw_mod_cat_kcs_mod(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int km_if_id, int index,
+ uint32_t *value, int get)
+{
+ if ((unsigned int)index >= be->cat.nb_cat_funcs) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ /* find KM module */
+ int km_if_idx = find_km_flm_module_interface_index(be, if_num, km_if_id);
+
+ if (km_if_idx < 0)
+ return km_if_idx;
+
+ switch (_VER_) {
+ case 18:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v18.kcs[index].category, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 18 */
+ case 21:
+ switch (field) {
+ case HW_CAT_KCS_CATEGORY:
+ GET_SET(be->cat.v21.kcs[index].category[km_if_idx], value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ /* end case 21 */
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_cat_kcs_km_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_km_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 0, index, value, 1);
+}
+
+int hw_mod_cat_kcs_flm_set(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, &value, 0);
+}
+
+int hw_mod_cat_kcs_flm_get(struct flow_api_backend_s *be, enum hw_cat_e field,
+ enum km_flm_if_select_e if_num, int index, uint32_t *value)
+{
+ return hw_mod_cat_kcs_mod(be, field, if_num, 1, index, value, 1);
+}
+
/*
* FTE
*/
@@ -1094,6 +1272,12 @@ int hw_mod_cat_cte_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cte_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cte_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cte_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cts_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
int addr_size = (_VER_ < 15) ? 8 : ((be->cat.cts_num + 1) / 2);
@@ -1154,6 +1338,12 @@ int hw_mod_cat_cts_set(struct flow_api_backend_s *be, enum hw_cat_e field, int i
return hw_mod_cat_cts_mod(be, field, index, &value, 0);
}
+int hw_mod_cat_cts_get(struct flow_api_backend_s *be, enum hw_cat_e field, int index,
+ uint32_t *value)
+{
+ return hw_mod_cat_cts_mod(be, field, index, value, 1);
+}
+
int hw_mod_cat_cot_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 8c1f3f2d96..f5eaea7c4e 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -293,11 +293,268 @@ int hw_mod_flm_control_set(struct flow_api_backend_s *be, enum hw_flm_e field, u
return hw_mod_flm_control_mod(be, field, &value, 0);
}
+int hw_mod_flm_status_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_status_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_status_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STATUS_CALIB_SUCCESS:
+ GET_SET(be->flm.v25.status->calib_success, value);
+ break;
+
+ case HW_FLM_STATUS_CALIB_FAIL:
+ GET_SET(be->flm.v25.status->calib_fail, value);
+ break;
+
+ case HW_FLM_STATUS_INITDONE:
+ GET_SET(be->flm.v25.status->initdone, value);
+ break;
+
+ case HW_FLM_STATUS_IDLE:
+ GET_SET(be->flm.v25.status->idle, value);
+ break;
+
+ case HW_FLM_STATUS_CRITICAL:
+ GET_SET(be->flm.v25.status->critical, value);
+ break;
+
+ case HW_FLM_STATUS_PANIC:
+ GET_SET(be->flm.v25.status->panic, value);
+ break;
+
+ case HW_FLM_STATUS_CRCERR:
+ GET_SET(be->flm.v25.status->crcerr, value);
+ break;
+
+ case HW_FLM_STATUS_EFT_BP:
+ GET_SET(be->flm.v25.status->eft_bp, value);
+ break;
+
+ case HW_FLM_STATUS_CACHE_BUFFER_CRITICAL:
+ GET_SET(be->flm.v25.status->cache_buf_critical, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_status_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_status_mod(be, field, value, 1);
+}
+
int hw_mod_flm_scan_flush(struct flow_api_backend_s *be)
{
return be->iface->flm_scan_flush(be->be_dev, &be->flm);
}
+static int hw_mod_flm_scan_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCAN_I:
+ GET_SET(be->flm.v25.scan->i, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scan_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_scan_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_load_bin_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_load_bin_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_load_bin_mod(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_LOAD_BIN:
+ GET_SET(be->flm.v25.load_bin->bin, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_load_bin_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_load_bin_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_prio_flush(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_prio_flush(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_prio_mod(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value,
+ int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PRIO_LIMIT0:
+ GET_SET(be->flm.v25.prio->limit0, value);
+ break;
+
+ case HW_FLM_PRIO_FT0:
+ GET_SET(be->flm.v25.prio->ft0, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT1:
+ GET_SET(be->flm.v25.prio->limit1, value);
+ break;
+
+ case HW_FLM_PRIO_FT1:
+ GET_SET(be->flm.v25.prio->ft1, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT2:
+ GET_SET(be->flm.v25.prio->limit2, value);
+ break;
+
+ case HW_FLM_PRIO_FT2:
+ GET_SET(be->flm.v25.prio->ft2, value);
+ break;
+
+ case HW_FLM_PRIO_LIMIT3:
+ GET_SET(be->flm.v25.prio->limit3, value);
+ break;
+
+ case HW_FLM_PRIO_FT3:
+ GET_SET(be->flm.v25.prio->ft3, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_prio_set(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t value)
+{
+ return hw_mod_flm_prio_mod(be, field, &value, 0);
+}
+
+int hw_mod_flm_pst_flush(struct flow_api_backend_s *be, int start_idx, int count)
+{
+ if (count == ALL_ENTRIES)
+ count = be->flm.nb_pst_profiles;
+
+ if ((unsigned int)(start_idx + count) > be->flm.nb_pst_profiles) {
+ INDEX_TOO_LARGE_LOG;
+ return INDEX_TOO_LARGE;
+ }
+
+ return be->iface->flm_pst_flush(be->be_dev, &be->flm, start_idx, count);
+}
+
+static int hw_mod_flm_pst_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_PST_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.pst[index], (uint8_t)*value,
+ sizeof(struct flm_v25_pst_s));
+ break;
+
+ case HW_FLM_PST_BP:
+ GET_SET(be->flm.v25.pst[index].bp, value);
+ break;
+
+ case HW_FLM_PST_PP:
+ GET_SET(be->flm.v25.pst[index].pp, value);
+ break;
+
+ case HW_FLM_PST_TP:
+ GET_SET(be->flm.v25.pst[index].tp, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_pst_mod(be, field, index, &value, 0);
+}
+
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count)
{
if (count == ALL_ENTRIES)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index dec96fce85..61492090ce 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,14 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_FT_LOOKUP_KEY_A 0
+#define HW_DB_FT_LOOKUP_KEY_C 2
+
+#define HW_DB_FT_TYPE_FLM 0
+#define HW_DB_FT_TYPE_KM 1
/******************************************************************************/
/* Handle */
/******************************************************************************/
@@ -59,6 +67,23 @@ struct hw_db_inline_resource_db {
int ref;
} *cat;
+ struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_resource_db_flm_ft {
+ struct hw_db_inline_flm_ft_data data;
+ struct hw_db_flm_ft idx;
+ int ref;
+ } *ft;
+
+ struct hw_db_inline_resource_db_flm_match_set {
+ struct hw_db_match_set_idx idx;
+ int ref;
+ } *match_set;
+
+ struct hw_db_inline_resource_db_flm_cfn_map {
+ int cfn_idx;
+ } *cfn_map;
+ } *flm;
+
struct hw_db_inline_resource_db_km_rcp {
struct hw_db_inline_km_rcp_data data;
int ref;
@@ -70,6 +95,7 @@ struct hw_db_inline_resource_db {
} *km;
uint32_t nb_cat;
+ uint32_t nb_flm_ft;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -173,6 +199,13 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
}
*db_handle = db;
+
+ /* Preset data */
+
+ db->flm[0].ft[1].idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ db->flm[0].ft[1].idx.id1 = 1;
+ db->flm[0].ft[1].ref = 1;
+
return 0;
}
@@ -235,6 +268,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ hw_db_inline_flm_ft_deref(ndev, db_handle,
+ *(struct hw_db_flm_ft *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_KM_RCP:
hw_db_inline_km_deref(ndev, db_handle, *(struct hw_db_km_idx *)&idxs[i]);
break;
@@ -286,6 +324,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_FT:
+ return NULL; /* FTs can't be easily looked up */
+
case HW_DB_IDX_TYPE_KM_RCP:
return &db->km[idxs[i].id1].data;
@@ -307,6 +348,61 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
/* Filter */
/******************************************************************************/
+/*
+ * lookup refers to key A/B/C/D, and can have values 0, 1, 2, and 3.
+ */
+static void hw_db_set_ft(struct flow_nic_dev *ndev, int type, int cfn_index, int lookup,
+ int flow_type, int enable)
+{
+ (void)type;
+ (void)enable;
+
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index = (8 * flow_type + cfn_index / cat_funcs) * max_lookups + lookup;
+ int fte_field = cfn_index % cat_funcs;
+
+ uint32_t current_bm = 0;
+ uint32_t fte_field_bm = 1 << fte_field;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST, fte_index,
+ &current_bm);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t final_bm = enable ? (fte_field_bm | current_bm) : (~fte_field_bm & current_bm);
+
+ if (current_bm != final_bm) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index, final_bm);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
/*
* Setup a filter to match:
* All packets in CFN checks
@@ -348,6 +444,17 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
if (hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1))
return -1;
+ /* KM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match all FTs for look-up A */
+ for (int i = 0; i < 16; ++i)
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, i, 1);
+
+ /* FLM: Match FT=ft_argument for look-up C */
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft, 1);
+
/* Make all CFN checks TRUE */
if (hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0))
return -1;
@@ -1252,6 +1359,133 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+/******************************************************************************/
+/* FLM FT */
+/******************************************************************************/
+
+static int hw_db_inline_flm_ft_compare(const struct hw_db_inline_flm_ft_data *data1,
+ const struct hw_db_inline_flm_ft_data *data2)
+{
+ return data1->is_group_zero == data2->is_group_zero && data1->jump == data2->jump &&
+ data1->action_set.raw == data2->action_set.raw;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ if (data->is_group_zero) {
+ idx.error = 1;
+ return idx;
+ }
+
+ if (flm_rcp->ft[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->group];
+ struct hw_db_flm_ft idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_FT;
+ idx.id1 = 0;
+ idx.id2 = data->group & 0xff;
+
+ /* RCP 0 always uses FT 1; i.e. use unhandled FT for disabled RCP */
+ if (data->group == 0) {
+ idx.id1 = 1;
+ return idx;
+ }
+
+ if (data->is_group_zero) {
+ idx.id3 = 1;
+ return idx;
+ }
+
+ /* FLM_FT records 0, 1 and last (15) are reserved */
+ /* NOTE: RES_FLM_FLOW_TYPE resource is global and it cannot be used in _add() and _deref()
+ * to track usage of FLM_FT recipes which are group specific.
+ */
+ for (uint32_t i = 2; i < db->nb_flm_ft; ++i) {
+ if (!found && flm_rcp->ft[i].ref <= 0 &&
+ !flow_nic_is_resource_used(ndev, RES_FLM_FLOW_TYPE, i)) {
+ found = 1;
+ idx.id1 = i;
+ }
+
+ if (flm_rcp->ft[i].ref > 0 &&
+ hw_db_inline_flm_ft_compare(data, &flm_rcp->ft[i].data)) {
+ idx.id1 = i;
+ hw_db_inline_flm_ft_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&flm_rcp->ft[idx.id1].data, data, sizeof(struct hw_db_inline_flm_ft_data));
+ flm_rcp->ft[idx.id1].idx.raw = idx.raw;
+ flm_rcp->ft[idx.id1].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error && idx.id3 == 0)
+ db->flm[idx.id2].ft[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+
+ if (idx.error || idx.id2 == 0 || idx.id3 > 0)
+ return;
+
+ flm_rcp = &db->flm[idx.id2];
+
+ flm_rcp->ft[idx.id1].ref -= 1;
+
+ if (flm_rcp->ft[idx.id1].ref > 0)
+ return;
+
+ flm_rcp->ft[idx.id1].ref = 0;
+ memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
+}
/******************************************************************************/
/* HSH */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 18d959307e..a520ae1769 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -32,6 +32,10 @@ struct hw_db_idx {
HW_DB_IDX;
};
+struct hw_db_match_set_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_action_set_idx {
HW_DB_IDX;
};
@@ -106,6 +110,13 @@ struct hw_db_tpe_ext_idx {
HW_DB_IDX;
};
+struct hw_db_flm_idx {
+ HW_DB_IDX;
+};
+struct hw_db_flm_ft {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -128,6 +139,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE_EXT,
HW_DB_IDX_TYPE_KM_RCP,
+ HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -211,6 +223,17 @@ struct hw_db_inline_km_ft_data {
struct hw_db_action_set_idx action_set;
};
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+ struct hw_db_action_set_idx action_set;
+};
+
/**/
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle);
@@ -277,6 +300,16 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+
+struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_ft_data *data);
+void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_ft idx);
+void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_ft idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 07801b42ff..46ea70df20 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -11,6 +11,7 @@
#include "flow_api.h"
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "flow_id_table.h"
#include "stream_binary_flow_api.h"
@@ -47,6 +48,128 @@ static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
return -1;
}
+/*
+ * Flow Matcher functionality
+ */
+
+static int flm_sdram_calibrate(struct flow_nic_dev *ndev)
+{
+ int success = 0;
+ uint32_t fail_value = 0;
+ uint32_t value = 0;
+
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_PRESET_ALL, 0x0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_SPLIT_SDRAM_USAGE, 0x10);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for ddr4 calibration/init done */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_SUCCESS, &value);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_CALIB_FAIL, &fail_value);
+
+ if (value & 0x80000000) {
+ success = 1;
+ break;
+ }
+
+ if (fail_value != 0)
+ break;
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - SDRAM calibration failed");
+ NT_LOG(ERR, FILTER,
+ "Calibration status: success 0x%08" PRIx32 " - fail 0x%08" PRIx32,
+ value, fail_value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
+{
+ int success = 0;
+
+ /*
+ * Make sure no lookup is performed during init, i.e.
+ * disable every category and disable FLM
+ */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Wait for FLM to enter Idle state */
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_IDLE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER, "FLM initialization failed - Never idle");
+ return -1;
+ }
+
+ success = 0;
+
+ /* Start SDRAM initialization */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x1);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ for (uint32_t i = 0; i < 1000000; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_status_update(&ndev->be);
+ hw_mod_flm_status_get(&ndev->be, HW_FLM_STATUS_INITDONE, &value);
+
+ if (value) {
+ success = 1;
+ break;
+ }
+
+ nt_os_wait_usec(1);
+ }
+
+ if (!success) {
+ NT_LOG(ERR, FILTER,
+ "FLM initialization failed - SDRAM initialization incomplete");
+ return -1;
+ }
+
+ /* Set the INIT value back to zero to clear the bit in the SW register cache */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_INIT, 0x0);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Enable FLM */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, enable);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ int nb_rpp_per_ps = ndev->be.flm.nb_rpp_clock_in_ps;
+ int nb_load_aps_max = ndev->be.flm.nb_load_aps_max;
+ uint32_t scan_i_value = 0;
+
+ if (NTNIC_SCANNER_LOAD > 0) {
+ scan_i_value = (1 / (nb_rpp_per_ps * 0.000000000001)) /
+ (nb_load_aps_max * NTNIC_SCANNER_LOAD);
+ }
+
+ hw_mod_flm_scan_set(&ndev->be, HW_FLM_SCAN_I, scan_i_value);
+ hw_mod_flm_scan_flush(&ndev->be);
+
+ return 0;
+}
+
+
+
struct flm_flow_key_def_s {
union {
struct {
@@ -2354,11 +2477,11 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
const struct nic_flow_def *fd,
const struct hw_db_inline_qsl_data *qsl_data,
const struct hw_db_inline_hsh_data *hsh_data,
- uint32_t group __rte_unused,
+ uint32_t group,
uint32_t local_idxs[],
uint32_t *local_idx_counter,
- uint16_t *flm_rpl_ext_ptr __rte_unused,
- uint32_t *flm_ft __rte_unused,
+ uint16_t *flm_rpl_ext_ptr,
+ uint32_t *flm_ft,
uint32_t *flm_scrub __rte_unused,
struct rte_flow_error *error)
{
@@ -2507,6 +2630,25 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 0,
+ .group = group,
+ };
+ struct hw_db_flm_ft flm_ft_idx = empty_pattern
+ ? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
+ : hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ local_idxs[(*local_idx_counter)++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_ft)
+ *flm_ft = flm_ft_idx.id1;
+
return 0;
}
@@ -2514,7 +2656,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
const struct rte_flow_attr *attr,
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
- uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
+ uint32_t num_dest_port, uint32_t num_queues,
uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
struct flm_flow_key_def_s *key_def __rte_unused)
{
@@ -2808,6 +2950,21 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup FLM FT */
+ struct hw_db_inline_flm_ft_data flm_ft_data = {
+ .is_group_zero = 1,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ };
+ struct hw_db_flm_ft flm_ft_idx =
+ hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
+ fh->db_idxs[fh->db_idx_counter++] = flm_ft_idx.raw;
+
+ if (flm_ft_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM FT resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
nic_insert_flow(dev->ndev, fh);
}
@@ -3024,6 +3181,63 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
NT_VIOLATING_MBR_QSL) < 0)
goto err_exit0;
+ /* FLM */
+ if (flm_sdram_calibrate(ndev) < 0)
+ goto err_exit0;
+
+ if (flm_sdram_reset(ndev, 1) < 0)
+ goto err_exit0;
+
+ /* Learn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LDS, 0);
+ /* Learn fail status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LFS, 1);
+ /* Learn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_LIS, 1);
+ /* Unlearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UDS, 0);
+ /* Unlearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_UIS, 0);
+ /* Relearn done status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RDS, 0);
+ /* Relearn ignore status */
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RIS, 0);
+ hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_RBL, 4);
+ hw_mod_flm_control_flush(&ndev->be);
+
+ /* Set the sliding window size for FLM load */
+ uint32_t bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) /
+ (32ULL * ndev->be.flm.nb_rpp_clock_in_ps)) -
+ 1ULL);
+ hw_mod_flm_load_bin_set(&ndev->be, HW_FLM_LOAD_BIN, bin);
+ hw_mod_flm_load_bin_flush(&ndev->be);
+
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT0,
+ 0); /* Drop at 100% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT0, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT1,
+ 14); /* Drop at 87.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT1, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT2,
+ 10); /* Drop at 62.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT2, 1);
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_LIMIT3,
+ 6); /* Drop at 37.5% FIFO fill level */
+ hw_mod_flm_prio_set(&ndev->be, HW_FLM_PRIO_FT3, 1);
+ hw_mod_flm_prio_flush(&ndev->be);
+
+ /* TODO How to set and use these limits */
+ for (uint32_t i = 0; i < ndev->be.flm.nb_pst_profiles; ++i) {
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_BP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_PP, i,
+ NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT);
+ hw_mod_flm_pst_set(&ndev->be, HW_FLM_PST_TP, i,
+ NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT);
+ }
+
+ hw_mod_flm_pst_flush(&ndev->be, 0, ALL_ENTRIES);
+
ndev->id_table_handle = ntnic_id_table_create();
if (ndev->id_table_handle == NULL)
@@ -3052,6 +3266,8 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
#endif
if (ndev->flow_mgnt_prepared) {
+ flm_sdram_reset(ndev, 0);
+
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
new file mode 100644
index 0000000000..8ba8b8f67a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -0,0 +1,58 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _FLOW_API_PROFILE_INLINE_CONFIG_H_
+#define _FLOW_API_PROFILE_INLINE_CONFIG_H_
+
+/*
+ * Statistics are generated each time the byte counter crosses a limit.
+ * If BYTE_LIMIT is zero then the byte counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_LIMIT + 15) bytes
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(8 + 15) = 2^23 ~~ 8MB
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT 8
+
+/*
+ * Statistics are generated each time the packet counter crosses a limit.
+ * If PKT_LIMIT is zero then the packet counter does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(PKT_LIMIT + 11) pkts
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(5 + 11) = 2^16 pkts ~~ 64K pkts
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT 5
+
+/*
+ * Statistics are generated each time flow time (measured in ns) crosses a
+ * limit.
+ * If BYTE_TIMEOUT is zero then the flow time does not trigger statistics
+ * generation.
+ *
+ * Format: 2^(BYTE_TIMEOUT + 15) ns
+ * Valid range: 0 to 31
+ *
+ * Example: 2^(23 + 15) = 2^38 ns ~~ 275 sec
+ */
+#define NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT 23
+
+/*
+ * This define sets the percentage of the full processing capacity
+ * being reserved for scan operations. The scanner is responsible
+ * for detecting aged out flows and meters with statistics timeout.
+ *
+ * A high scanner load percentage will make this detection more precise
+ * but will also give lower packet processing capacity.
+ *
+ * The percentage is given as a decimal number, e.g. 0.01 for 1%, which is the recommended value.
+ */
+#define NTNIC_SCANNER_LOAD 0.01
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index 71ecd6c68c..a482fb43ad 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -16,6 +16,14 @@
#define ARRAY_SIZE(arr) RTE_DIM(arr)
#endif
+/*
+ * Window size in seconds for measuring FLM load
+ * and port load.
+ * The window size must be at most 3 minutes in
+ * order to prevent overflow.
+ */
+#define FLM_LOAD_WINDOWS_SIZE 2ULL
+
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
#define PCIIDENT_TO_BUSNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 8) & 0xFFU))
#define PCIIDENT_TO_DEVNR(pci_ident) ((uint8_t)(((unsigned int)(pci_ident) >> 3) & 0x1FU))
--
2.45.0
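The periodic-statistics macros introduced in flow_api_profile_inline_config.h above encode thresholds as exponents. A standalone sketch of those formulas, useful for sanity-checking the chosen defaults (the helper names are illustrative, not part of the driver; the macro values are the patch defaults):

```c
#include <assert.h>
#include <stdint.h>

/* Patch defaults from flow_api_profile_inline_config.h */
#define NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT 8
#define NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT 5
#define NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT 23

/* 2^(BYTE_LIMIT + 15) bytes between byte-triggered stats records */
static uint64_t stats_byte_threshold(void)
{
	return 1ULL << (NTNIC_FLOW_PERIODIC_STATS_BYTE_LIMIT + 15);
}

/* 2^(PKT_LIMIT + 11) packets between packet-triggered stats records */
static uint64_t stats_pkt_threshold(void)
{
	return 1ULL << (NTNIC_FLOW_PERIODIC_STATS_PKT_LIMIT + 11);
}

/* 2^(BYTE_TIMEOUT + 15) ns of flow time between time-triggered records */
static uint64_t stats_time_threshold_ns(void)
{
	return 1ULL << (NTNIC_FLOW_PERIODIC_STATS_BYTE_TIMEOUT + 15);
}
```

With the defaults this yields 8 MiB, 64K packets, and roughly 275 seconds, matching the examples in the header comments.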
* [PATCH v5 34/80] net/ntnic: add FLM RCP module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (32 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 33/80] net/ntnic: add FLM module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 35/80] net/ntnic: add learn flow queue handling Serhii Iliushyk
` (45 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup
and programming engine which supports exact-match lookup at line rate
for up to hundreds of millions of flows.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
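The flm_mask initializer in hw_db_inline_flm_add() below word-reverses the QW4 and QW0 parts of the software mask before it is written with HW_FLM_RCP_MASK, which is easy to miss when reading the diff. A minimal standalone sketch of that reordering (flm_mask_reorder is my name for it, not a driver function):

```c
#include <assert.h>
#include <stdint.h>

/* Software keeps the 10 FLM mask words as
 * { SW9, SW8, QW4[0..3], QW0[0..3] }; the register layout expects
 * SW9, SW8, then QW4 and QW0 each word-reversed. This mirrors the
 * flm_mask[10] initializer in hw_db_inline_flm_add(). */
static void flm_mask_reorder(const uint32_t in[10], uint32_t out[10])
{
	out[0] = in[0];   /* SW9 */
	out[1] = in[1];   /* SW8 */
	out[2] = in[5];   /* QW4, word-reversed */
	out[3] = in[4];
	out[4] = in[3];
	out[5] = in[2];
	out[6] = in[9];   /* QW0, word-reversed */
	out[7] = in[8];
	out[8] = in[7];
	out[9] = in[6];
}
```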
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 133 ++++++++++++
.../profile_inline/flow_api_hw_db_inline.c | 195 +++++++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 20 ++
.../profile_inline/flow_api_profile_inline.c | 42 +++-
5 files changed, 390 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index de662c4ed1..13722c30a9 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -683,6 +683,10 @@ int hw_mod_flm_pst_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
uint32_t value);
int hw_mod_flm_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value);
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f5eaea7c4e..0a7e90c04f 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -579,3 +579,136 @@ int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int cou
}
return be->iface->flm_scrub_flush(be->be_dev, &be->flm, start_idx, count);
}
+
+static int hw_mod_flm_rcp_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_RCP_PRESET_ALL:
+ if (get) {
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ memset(&be->flm.v25.rcp[index], (uint8_t)*value,
+ sizeof(struct flm_v25_rcp_s));
+ break;
+
+ case HW_FLM_RCP_LOOKUP:
+ GET_SET(be->flm.v25.rcp[index].lookup, value);
+ break;
+
+ case HW_FLM_RCP_QW0_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw0_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW0_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw0_ofs, value);
+ break;
+
+ case HW_FLM_RCP_QW0_SEL:
+ GET_SET(be->flm.v25.rcp[index].qw0_sel, value);
+ break;
+
+ case HW_FLM_RCP_QW4_DYN:
+ GET_SET(be->flm.v25.rcp[index].qw4_dyn, value);
+ break;
+
+ case HW_FLM_RCP_QW4_OFS:
+ GET_SET(be->flm.v25.rcp[index].qw4_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw8_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW8_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw8_ofs, value);
+ break;
+
+ case HW_FLM_RCP_SW8_SEL:
+ GET_SET(be->flm.v25.rcp[index].sw8_sel, value);
+ break;
+
+ case HW_FLM_RCP_SW9_DYN:
+ GET_SET(be->flm.v25.rcp[index].sw9_dyn, value);
+ break;
+
+ case HW_FLM_RCP_SW9_OFS:
+ GET_SET(be->flm.v25.rcp[index].sw9_ofs, value);
+ break;
+
+ case HW_FLM_RCP_MASK:
+ if (get) {
+ memcpy(value, be->flm.v25.rcp[index].mask,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+
+ } else {
+ memcpy(be->flm.v25.rcp[index].mask, value,
+ sizeof(((struct flm_v25_rcp_s *)0)->mask));
+ }
+
+ break;
+
+ case HW_FLM_RCP_KID:
+ GET_SET(be->flm.v25.rcp[index].kid, value);
+ break;
+
+ case HW_FLM_RCP_OPN:
+ GET_SET(be->flm.v25.rcp[index].opn, value);
+ break;
+
+ case HW_FLM_RCP_IPN:
+ GET_SET(be->flm.v25.rcp[index].ipn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_DYN:
+ GET_SET(be->flm.v25.rcp[index].byt_dyn, value);
+ break;
+
+ case HW_FLM_RCP_BYT_OFS:
+ GET_SET(be->flm.v25.rcp[index].byt_ofs, value);
+ break;
+
+ case HW_FLM_RCP_TXPLM:
+ GET_SET(be->flm.v25.rcp[index].txplm, value);
+ break;
+
+ case HW_FLM_RCP_AUTO_IPV4_MASK:
+ GET_SET(be->flm.v25.rcp[index].auto_ipv4_mask, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value)
+{
+ if (field != HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, value, 0);
+}
+
+int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ if (field == HW_FLM_RCP_MASK)
+ return UNSUP_VER;
+
+ return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 61492090ce..0ae058b91e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -68,6 +68,9 @@ struct hw_db_inline_resource_db {
} *cat;
struct hw_db_inline_resource_db_flm_rcp {
+ struct hw_db_inline_flm_rcp_data data;
+ int ref;
+
struct hw_db_inline_resource_db_flm_ft {
struct hw_db_inline_flm_ft_data data;
struct hw_db_flm_ft idx;
@@ -96,6 +99,7 @@ struct hw_db_inline_resource_db {
uint32_t nb_cat;
uint32_t nb_flm_ft;
+ uint32_t nb_flm_rcp;
uint32_t nb_km_ft;
uint32_t nb_km_rcp;
@@ -164,6 +168,42 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+
+ db->nb_flm_ft = ndev->be.cat.nb_flow_types;
+ db->nb_flm_rcp = ndev->be.flm.nb_categories;
+ db->flm = calloc(db->nb_flm_rcp, sizeof(struct hw_db_inline_resource_db_flm_rcp));
+
+ if (db->flm == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ db->flm[i].ft =
+ calloc(db->nb_flm_ft, sizeof(struct hw_db_inline_resource_db_flm_ft));
+
+ if (db->flm[i].ft == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].match_set =
+ calloc(db->nb_cat, sizeof(struct hw_db_inline_resource_db_flm_match_set));
+
+ if (db->flm[i].match_set == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
+ db->flm[i].cfn_map = calloc(db->nb_cat * db->nb_flm_ft,
+ sizeof(struct hw_db_inline_resource_db_flm_cfn_map));
+
+ if (db->flm[i].cfn_map == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+ }
+
db->nb_km_ft = ndev->be.cat.nb_flow_types;
db->nb_km_rcp = ndev->be.km.nb_categories;
db->km = calloc(db->nb_km_rcp, sizeof(struct hw_db_inline_resource_db_km_rcp));
@@ -222,6 +262,16 @@ void hw_db_inline_destroy(void *db_handle)
free(db->cat);
+ if (db->flm) {
+ for (uint32_t i = 0; i < db->nb_flm_rcp; ++i) {
+ free(db->flm[i].ft);
+ free(db->flm[i].match_set);
+ free(db->flm[i].cfn_map);
+ }
+
+ free(db->flm);
+ }
+
if (db->km) {
for (uint32_t i = 0; i < db->nb_km_rcp; ++i)
free(db->km[i].ft);
@@ -268,6 +318,10 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
*(struct hw_db_tpe_ext_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ hw_db_inline_flm_deref(ndev, db_handle, *(struct hw_db_flm_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_FLM_FT:
hw_db_inline_flm_ft_deref(ndev, db_handle,
*(struct hw_db_flm_ft *)&idxs[i]);
@@ -324,6 +378,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_TPE_EXT:
return &db->tpe_ext[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_RCP:
+ return &db->flm[idxs[i].id1].data;
+
case HW_DB_IDX_TYPE_FLM_FT:
return NULL; /* FTs can't be easily looked up */
@@ -481,6 +538,20 @@ int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id,
return 0;
}
+static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int flm_rcp)
+{
+ uint32_t flm_mask[10];
+ memset(flm_mask, 0xff, sizeof(flm_mask));
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, flm_rcp, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, flm_rcp, 1);
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, flm_rcp, flm_mask);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, flm_rcp, flm_rcp + 2);
+
+ hw_mod_flm_rcp_flush(&ndev->be, flm_rcp, 1);
+}
+
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1268,10 +1339,17 @@ void hw_db_inline_km_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_d
void hw_db_inline_km_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_km_idx idx)
{
(void)ndev;
- (void)db_handle;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
if (idx.error)
return;
+
+ db->km[idx.id1].ref -= 1;
+
+ if (db->km[idx.id1].ref <= 0) {
+ memset(&db->km[idx.id1].data, 0x0, sizeof(struct hw_db_inline_km_rcp_data));
+ db->km[idx.id1].ref = 0;
+ }
}
/******************************************************************************/
@@ -1359,6 +1437,121 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
km_rcp->ft[cat_offset + idx.id1].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM RCP */
+/******************************************************************************/
+
+static int hw_db_inline_flm_compare(const struct hw_db_inline_flm_rcp_data *data1,
+ const struct hw_db_inline_flm_rcp_data *data2)
+{
+ if (data1->qw0_dyn != data2->qw0_dyn || data1->qw0_ofs != data2->qw0_ofs ||
+ data1->qw4_dyn != data2->qw4_dyn || data1->qw4_ofs != data2->qw4_ofs ||
+ data1->sw8_dyn != data2->sw8_dyn || data1->sw8_ofs != data2->sw8_ofs ||
+ data1->sw9_dyn != data2->sw9_dyn || data1->sw9_ofs != data2->sw9_ofs ||
+ data1->outer_prot != data2->outer_prot || data1->inner_prot != data2->inner_prot) {
+ return 0;
+ }
+
+ for (int i = 0; i < 10; ++i)
+ if (data1->mask[i] != data2->mask[i])
+ return 0;
+
+ return 1;
+}
+
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_idx idx = { .raw = 0 };
+
+ idx.type = HW_DB_IDX_TYPE_FLM_RCP;
+ idx.id1 = group;
+
+ if (group == 0)
+ return idx;
+
+ if (db->flm[idx.id1].ref > 0) {
+ if (!hw_db_inline_flm_compare(data, &db->flm[idx.id1].data)) {
+ idx.error = 1;
+ return idx;
+ }
+
+ hw_db_inline_flm_ref(ndev, db, idx);
+ return idx;
+ }
+
+ db->flm[idx.id1].ref = 1;
+ memcpy(&db->flm[idx.id1].data, data, sizeof(struct hw_db_inline_flm_rcp_data));
+
+ {
+ uint32_t flm_mask[10] = {
+ data->mask[0], /* SW9 */
+ data->mask[1], /* SW8 */
+ data->mask[5], data->mask[4], data->mask[3], data->mask[2], /* QW4 */
+ data->mask[9], data->mask[8], data->mask[7], data->mask[6], /* QW0 */
+ };
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, idx.id1, 0x0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_LOOKUP, idx.id1, 1);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_DYN, idx.id1, data->qw0_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_OFS, idx.id1, data->qw0_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW0_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_DYN, idx.id1, data->qw4_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_QW4_OFS, idx.id1, data->qw4_ofs);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_DYN, idx.id1, data->sw8_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_OFS, idx.id1, data->sw8_ofs);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW8_SEL, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_DYN, idx.id1, data->sw9_dyn);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_SW9_OFS, idx.id1, data->sw9_ofs);
+
+ hw_mod_flm_rcp_set_mask(&ndev->be, HW_FLM_RCP_MASK, idx.id1, flm_mask);
+
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_KID, idx.id1, idx.id1 + 2);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_OPN, idx.id1, data->outer_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_IPN, idx.id1, data->inner_prot ? 1 : 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_DYN, idx.id1, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_BYT_OFS, idx.id1, -20);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_TXPLM, idx.id1, UINT32_MAX);
+
+ hw_mod_flm_rcp_flush(&ndev->be, idx.id1, 1);
+ }
+
+ return idx;
+}
+
+void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->flm[idx.id1].ref += 1;
+}
+
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ if (idx.id1 > 0) {
+ db->flm[idx.id1].ref -= 1;
+
+ if (db->flm[idx.id1].ref <= 0) {
+ memset(&db->flm[idx.id1].data, 0x0,
+ sizeof(struct hw_db_inline_flm_rcp_data));
+ db->flm[idx.id1].ref = 0;
+
+ hw_db_inline_setup_default_flm_rcp(ndev, idx.id1);
+ }
+ }
+}
+
/******************************************************************************/
/* FLM FT */
/******************************************************************************/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a520ae1769..9820225ffa 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -138,6 +138,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_TPE,
HW_DB_IDX_TYPE_TPE_EXT,
+ HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
HW_DB_IDX_TYPE_KM_FT,
@@ -165,6 +166,22 @@ struct hw_db_inline_cat_data {
uint8_t ip_prot_tunnel;
};
+struct hw_db_inline_flm_rcp_data {
+ uint64_t qw0_dyn : 5;
+ uint64_t qw0_ofs : 8;
+ uint64_t qw4_dyn : 5;
+ uint64_t qw4_ofs : 8;
+ uint64_t sw8_dyn : 5;
+ uint64_t sw8_ofs : 8;
+ uint64_t sw9_dyn : 5;
+ uint64_t sw9_ofs : 8;
+ uint64_t outer_prot : 1;
+ uint64_t inner_prot : 1;
+ uint64_t padding : 10;
+
+ uint32_t mask[10];
+};
+
struct hw_db_inline_qsl_data {
uint32_t discard : 1;
uint32_t drop : 1;
@@ -300,7 +317,10 @@ void hw_db_inline_km_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struct
/**/
+struct hw_db_flm_idx hw_db_inline_flm_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_flm_rcp_data *data, int group);
void hw_db_inline_flm_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
+void hw_db_inline_flm_deref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_flm_idx idx);
struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_flm_ft_data *data);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 46ea70df20..7a0cb1f9c4 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -101,6 +101,11 @@ static int flm_sdram_reset(struct flow_nic_dev *ndev, int enable)
hw_mod_flm_control_set(&ndev->be, HW_FLM_CONTROL_ENABLE, 0x0);
hw_mod_flm_control_flush(&ndev->be);
+ for (uint32_t i = 1; i < ndev->be.flm.nb_categories; ++i)
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, i, 0x0);
+
+ hw_mod_flm_rcp_flush(&ndev->be, 1, ndev->be.flm.nb_categories - 1);
+
/* Wait for FLM to enter Idle state */
for (uint32_t i = 0; i < 1000000; ++i) {
uint32_t value = 0;
@@ -2657,8 +2662,8 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
struct rte_flow_error *error, uint32_t port_id,
uint32_t num_dest_port, uint32_t num_queues,
- uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
- struct flm_flow_key_def_s *key_def __rte_unused)
+ uint32_t *packet_data, uint32_t *packet_mask,
+ struct flm_flow_key_def_s *key_def)
{
struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
@@ -2691,6 +2696,31 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
* Flow for group 1..32
*/
+ /* Setup FLM RCP */
+ struct hw_db_inline_flm_rcp_data flm_data = {
+ .qw0_dyn = key_def->qw0_dyn,
+ .qw0_ofs = key_def->qw0_ofs,
+ .qw4_dyn = key_def->qw4_dyn,
+ .qw4_ofs = key_def->qw4_ofs,
+ .sw8_dyn = key_def->sw8_dyn,
+ .sw8_ofs = key_def->sw8_ofs,
+ .sw9_dyn = key_def->sw9_dyn,
+ .sw9_ofs = key_def->sw9_ofs,
+ .outer_prot = key_def->outer_proto,
+ .inner_prot = key_def->inner_proto,
+ };
+ memcpy(flm_data.mask, packet_mask, sizeof(uint32_t) * 10);
+ struct hw_db_flm_idx flm_idx =
+ hw_db_inline_flm_add(dev->ndev, dev->ndev->hw_db_handle, &flm_data,
+ attr->group);
+ fh->db_idxs[fh->db_idx_counter++] = flm_idx.raw;
+
+ if (flm_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup Actions */
uint16_t flm_rpl_ext_ptr = 0;
uint32_t flm_ft = 0;
@@ -2703,7 +2733,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
/* Program flow */
- convert_fh_to_fh_flm(fh, packet_data, 2, flm_ft, flm_rpl_ext_ptr,
+ convert_fh_to_fh_flm(fh, packet_data, flm_idx.id1 + 2, flm_ft, flm_rpl_ext_ptr,
flm_scrub, attr->priority & 0x3);
flm_flow_programming(fh, NT_FLM_OP_LEARN);
@@ -3271,6 +3301,12 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_KM_FLOW_TYPE, 0);
flow_nic_free_resource(ndev, RES_KM_CATEGORY, 0);
+ hw_mod_flm_rcp_set(&ndev->be, HW_FLM_RCP_PRESET_ALL, 0, 0);
+ hw_mod_flm_rcp_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 0);
+ flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
+ flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
--
2.45.0
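The hw_db_inline_flm_add/_ref/_deref trio above implements a small shared-entry reference-counting scheme: an add either reuses a matching cached recipe (bumping its refcount) or claims the slot and programs the hardware, and the last deref clears the slot back to defaults. A minimal sketch of that pattern, with illustrative toy_* names and an integer standing in for the recipe data:

```c
#include <assert.h>
#include <string.h>

/* One cached slot: refcount plus the (simplified) recipe payload */
struct toy_entry {
	int ref;
	int data;
};

/* Add a recipe to a fixed slot. Returns the slot index, or -1 when the
 * slot already holds a different recipe (cf. the idx.error path). */
static int toy_add(struct toy_entry *tbl, int slot, int data)
{
	if (tbl[slot].ref > 0) {
		if (tbl[slot].data != data)
			return -1;      /* same slot, conflicting recipe */
		tbl[slot].ref++;        /* reuse the cached entry */
		return slot;
	}
	tbl[slot].ref = 1;              /* first user "programs the hardware" */
	tbl[slot].data = data;
	return slot;
}

/* Drop one reference; the last user resets the slot to defaults,
 * as hw_db_inline_flm_deref() does via the default FLM RCP setup. */
static void toy_deref(struct toy_entry *tbl, int slot)
{
	if (--tbl[slot].ref <= 0)
		memset(&tbl[slot], 0, sizeof(tbl[slot]));
}
```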
* [PATCH v5 35/80] net/ntnic: add learn flow queue handling
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (33 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 34/80] net/ntnic: add FLM RCP module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 36/80] net/ntnic: match and action db attributes were added Serhii Iliushyk
` (44 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implement a thread that handles the flow learn queue.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
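The learn queue below is built on rte_ring's zero-copy element API: the consumer borrows a contiguous run of learn records (flm_lrn_queue_get_read_buffer() returning a read_record) and releases it once the records are flushed to hardware. A minimal single-producer/single-consumer sketch of that borrow/release access pattern, standing in for rte_ring and not taken from the driver (all toy_* names are mine):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define Q_SIZE 8 /* power of two, like QUEUE_SIZE = 1 << 13 in the patch */

struct toy_queue {
	uint32_t buf[Q_SIZE];
	unsigned int head, tail; /* free-running producer/consumer indices */
};

typedef struct read_record {
	uint32_t *p;
	uint32_t num;
} read_record;

static int toy_enqueue(struct toy_queue *q, uint32_t v)
{
	if (q->head - q->tail == Q_SIZE)
		return -1; /* full */
	q->buf[q->head % Q_SIZE] = v;
	q->head++;
	return 0;
}

/* Borrow the longest contiguous readable run without copying,
 * analogous to rte_ring_dequeue_zc_burst_elem_start(). */
static read_record toy_get_read_buffer(struct toy_queue *q)
{
	read_record rr = { NULL, 0 };
	unsigned int avail = q->head - q->tail;
	unsigned int idx = q->tail % Q_SIZE;
	unsigned int contig = Q_SIZE - idx;

	if (avail) {
		rr.p = &q->buf[idx];
		rr.num = avail < contig ? avail : contig;
	}
	return rr;
}

/* Hand the consumed slots back, analogous to
 * rte_ring_dequeue_zc_elem_finish(). */
static void toy_release_read_buffer(struct toy_queue *q, uint32_t num)
{
	q->tail += num;
}
```

The key property the learn thread relies on is that records are processed in place in the ring's storage, so a burst of learn entries can be handed to hw_mod_flm_lrn_data_set_flush() without an intermediate copy.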
drivers/net/ntnic/include/hw_mod_backend.h | 5 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 33 +++++++
.../flow_api/profile_inline/flm_lrn_queue.c | 42 +++++++++
.../flow_api/profile_inline/flm_lrn_queue.h | 11 +++
.../profile_inline/flow_api_profile_inline.c | 48 ++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 94 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
8 files changed, 241 insertions(+)
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 13722c30a9..17d5755634 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,11 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt);
+
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
struct hsh_func_s {
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8017aa4fc3..8ebdd98db0 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -14,6 +14,7 @@ typedef struct ntdrv_4ga_s {
char *p_drv_name;
volatile bool b_shutdown;
+ rte_thread_t flm_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 0a7e90c04f..f4c29b8bde 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,3 +712,36 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
+ const uint32_t *value, uint32_t records,
+ uint32_t *handled_records, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ int ret = 0;
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_LRN_DATA:
+ ret = be->iface->flm_lrn_data_flush(be->be_dev, &be->flm, value, records,
+ handled_records,
+ (sizeof(struct flm_v25_lrn_data_s) /
+ sizeof(uint32_t)),
+ inf_word_cnt, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
index ad7efafe08..6e77c28f93 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.c
@@ -13,8 +13,28 @@
#include "flm_lrn_queue.h"
+#define QUEUE_SIZE (1 << 13)
+
#define ELEM_SIZE sizeof(struct flm_v25_lrn_data_s)
+void *flm_lrn_queue_create(void)
+{
+ static_assert((ELEM_SIZE & ~(size_t)3) == ELEM_SIZE, "FLM LEARN struct size");
+ struct rte_ring *q = rte_ring_create_elem("RFQ",
+ ELEM_SIZE,
+ QUEUE_SIZE,
+ SOCKET_ID_ANY,
+ RING_F_MP_HTS_ENQ | RING_F_SC_DEQ);
+ assert(q != NULL);
+ return q;
+}
+
+void flm_lrn_queue_free(void *q)
+{
+ if (q)
+ rte_ring_free(q);
+}
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q)
{
struct rte_ring_zc_data zcd;
@@ -26,3 +46,25 @@ void flm_lrn_queue_release_write_buffer(void *q)
{
rte_ring_enqueue_zc_elem_finish(q, 1);
}
+
+read_record flm_lrn_queue_get_read_buffer(void *q)
+{
+ struct rte_ring_zc_data zcd;
+ read_record rr;
+
+ if (rte_ring_dequeue_zc_burst_elem_start(q, ELEM_SIZE, QUEUE_SIZE, &zcd, NULL) != 0) {
+ rr.num = zcd.n1;
+ rr.p = zcd.ptr1;
+
+ } else {
+ rr.num = 0;
+ rr.p = NULL;
+ }
+
+ return rr;
+}
+
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num)
+{
+ rte_ring_dequeue_zc_elem_finish(q, num);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
index 8cee0c8e78..40558f4201 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_lrn_queue.h
@@ -8,7 +8,18 @@
#include <stdint.h>
+typedef struct read_record {
+ uint32_t *p;
+ uint32_t num;
+} read_record;
+
+void *flm_lrn_queue_create(void);
+void flm_lrn_queue_free(void *q);
+
uint32_t *flm_lrn_queue_get_write_buffer(void *q);
void flm_lrn_queue_release_write_buffer(void *q);
+read_record flm_lrn_queue_get_read_buffer(void *q);
+void flm_lrn_queue_release_read_buffer(void *q, uint32_t num);
+
#endif /* _FLM_LRN_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7a0cb1f9c4..7487b5150e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -39,6 +39,48 @@
static void *flm_lrn_queue_arr;
+static void flm_setup_queues(void)
+{
+ flm_lrn_queue_arr = flm_lrn_queue_create();
+ assert(flm_lrn_queue_arr != NULL);
+}
+
+static void flm_free_queues(void)
+{
+ flm_lrn_queue_free(flm_lrn_queue_arr);
+}
+
+static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
+ uint32_t *sta_word_cnt)
+{
+ read_record r = flm_lrn_queue_get_read_buffer(flm_lrn_queue_arr);
+
+ if (r.num) {
+ uint32_t handled_records = 0;
+
+ if (hw_mod_flm_lrn_data_set_flush(&dev->ndev->be, HW_FLM_FLOW_LRN_DATA, r.p, r.num,
+ &handled_records, inf_word_cnt, sta_word_cnt)) {
+ NT_LOG(ERR, FILTER, "Flow programming failed");
+
+ } else if (handled_records > 0) {
+ flm_lrn_queue_release_read_buffer(flm_lrn_queue_arr, handled_records);
+ }
+ }
+
+ return r.num;
+}
+
+static uint32_t flm_update(struct flow_eth_dev *dev)
+{
+ static uint32_t inf_word_cnt;
+ static uint32_t sta_word_cnt;
+
+ if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
+ return 1;
+
+ return inf_word_cnt + sta_word_cnt;
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -4214,6 +4256,12 @@ static const struct profile_inline_ops ops = {
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * NT Flow FLM Meter API
+ */
+ .flm_setup_queues = flm_setup_queues,
+ .flm_free_queues = flm_free_queues,
+ .flm_update = flm_update,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index a509a8eb51..bfca8f28b1 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -24,6 +24,11 @@
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
+#define THREAD_JOIN(a) rte_thread_join(a, NULL)
+#define THREAD_FUNC static uint32_t
+#define THREAD_RETURN (0)
#define HW_MAX_PKT_LEN (10000)
#define MAX_MTU (HW_MAX_PKT_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)
@@ -120,6 +125,16 @@ store_pdrv(struct drv_s *p_drv)
rte_spinlock_unlock(&hwlock);
}
+static void clear_pdrv(struct drv_s *p_drv)
+{
+ if (p_drv->adapter_no > NUM_ADAPTER_MAX)
+ return;
+
+ rte_spinlock_lock(&hwlock);
+ _g_p_drv[p_drv->adapter_no] = NULL;
+ rte_spinlock_unlock(&hwlock);
+}
+
static struct drv_s *
get_pdrv_from_pci(struct rte_pci_addr addr)
{
@@ -1240,6 +1255,13 @@ eth_dev_set_link_down(struct rte_eth_dev *eth_dev)
static void
drv_deinit(struct drv_s *p_drv)
{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ return;
+ }
+
const struct adapter_ops *adapter_ops = get_adapter_ops();
if (adapter_ops == NULL) {
@@ -1251,6 +1273,22 @@ drv_deinit(struct drv_s *p_drv)
return;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ fpga_info_t *fpga_info = &p_nt_drv->adapter_info.fpga_info;
+
+ /*
+ * Mark the global pdrv as cleared. Used by some threads to terminate.
+ * Wait 1 second to give the threads a chance to see the termination.
+ */
+ clear_pdrv(p_drv);
+ nt_os_wait_usec(1000000);
+
+ /* stop statistics threads */
+ p_drv->ntdrv.b_shutdown = true;
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ THREAD_JOIN(p_nt_drv->flm_thread);
+ profile_inline_ops->flm_free_queues();
+ }
/* stop adapter */
adapter_ops->deinit(&p_nt_drv->adapter_info);
@@ -1359,6 +1397,43 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.promiscuous_enable = promiscuous_enable,
};
+/*
+ * Adapter flm stat thread
+ */
+THREAD_FUNC adapter_flm_update_thread_fn(void *context)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: profile_inline module uninitialized", __func__);
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct nt4ga_filter_s *p_nt4ga_filter = &p_adapter_info->nt4ga_filter;
+ struct flow_nic_dev *p_flow_nic_dev = p_nt4ga_filter->mp_flow_device;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: waiting for port configuration",
+ p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (p_flow_nic_dev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ struct flow_eth_dev *dev = p_flow_nic_dev->eth_base;
+
+ NT_LOG(DBG, NTNIC, "%s: %s: begin", p_adapter_info->mp_adapter_id_str, __func__);
+
+ while (!p_drv->ntdrv.b_shutdown)
+ if (profile_inline_ops->flm_update(dev) == 0)
+ nt_os_wait_usec(10);
+
+ NT_LOG(DBG, NTNIC, "%s: %s: end", p_adapter_info->mp_adapter_id_str, __func__);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1369,6 +1444,13 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* Return statement is not necessary here to allow traffic processing by SW */
}
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ /* Return statement is not necessary here to allow traffic processing by SW */
+ }
+
nt_vfio_init();
const struct port_ops *port_ops = get_port_ops();
@@ -1597,6 +1679,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ if (profile_inline_ops != NULL && fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ profile_inline_ops->flm_setup_queues();
+ res = THREAD_CTRL_CREATE(&p_nt_drv->flm_thread, "ntnic-nt_flm_update_thr",
+ adapter_flm_update_thread_fn, (void *)p_drv);
+
+ if (res) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1069be2f85..27d6cbef01 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -256,6 +256,13 @@ struct profile_inline_ops {
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+
+ /*
+ * NT Flow FLM queue API
+ */
+ void (*flm_setup_queues)(void);
+ void (*flm_free_queues)(void);
+ uint32_t (*flm_update)(struct flow_eth_dev *dev);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
* [PATCH v5 36/80] net/ntnic: match and action db attributes were added
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (34 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 35/80] net/ntnic: add learn flow queue handling Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 37/80] net/ntnic: add flow dump feature Serhii Iliushyk
` (43 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Implements match/action dereferencing
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../profile_inline/flow_api_hw_db_inline.c | 795 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 35 +
.../profile_inline/flow_api_profile_inline.c | 55 ++
3 files changed, 885 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 0ae058b91e..52f85b65af 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -9,6 +9,9 @@
#include "flow_api_hw_db_inline.h"
#include "rte_common.h"
+#define HW_DB_INLINE_ACTION_SET_NB 512
+#define HW_DB_INLINE_MATCH_SET_NB 512
+
#define HW_DB_FT_LOOKUP_KEY_A 0
#define HW_DB_FT_TYPE_KM 1
@@ -110,6 +113,20 @@ struct hw_db_inline_resource_db {
int cfn_hw;
int ref;
} *cfn;
+
+ uint32_t cfn_priority_counter;
+ uint32_t set_priority_counter;
+
+ struct hw_db_inline_resource_db_action_set {
+ struct hw_db_inline_action_set_data data;
+ int ref;
+ } action_set[HW_DB_INLINE_ACTION_SET_NB];
+
+ struct hw_db_inline_resource_db_match_set {
+ struct hw_db_inline_match_set_data data;
+ int ref;
+ uint32_t set_priority;
+ } match_set[HW_DB_INLINE_MATCH_SET_NB];
};
int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
@@ -292,6 +309,16 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
case HW_DB_IDX_TYPE_NONE:
break;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ hw_db_inline_match_set_deref(ndev, db_handle,
+ *(struct hw_db_match_set_idx *)&idxs[i]);
+ break;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ hw_db_inline_action_set_deref(ndev, db_handle,
+ *(struct hw_db_action_set_idx *)&idxs[i]);
+ break;
+
case HW_DB_IDX_TYPE_CAT:
hw_db_inline_cat_deref(ndev, db_handle, *(struct hw_db_cat_idx *)&idxs[i]);
break;
@@ -360,6 +387,12 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_NONE:
return NULL;
+ case HW_DB_IDX_TYPE_MATCH_SET:
+ return &db->match_set[idxs[i].ids].data;
+
+ case HW_DB_IDX_TYPE_ACTION_SET:
+ return &db->action_set[idxs[i].ids].data;
+
case HW_DB_IDX_TYPE_CAT:
return &db->cat[idxs[i].ids].data;
@@ -552,6 +585,763 @@ static void hw_db_inline_setup_default_flm_rcp(struct flow_nic_dev *ndev, int fl
}
+static void hw_db_copy_ft(struct flow_nic_dev *ndev, int type, int cfn_dst, int cfn_src,
+ int lookup, int flow_type)
+{
+ const int max_lookups = 4;
+ const int cat_funcs = (int)ndev->be.cat.nb_cat_funcs / 8;
+
+ int fte_index_dst = (8 * flow_type + cfn_dst / cat_funcs) * max_lookups + lookup;
+ int fte_field_dst = cfn_dst % cat_funcs;
+
+ int fte_index_src = (8 * flow_type + cfn_src / cat_funcs) * max_lookups + lookup;
+ int fte_field_src = cfn_src % cat_funcs;
+
+ uint32_t current_bm_dst = 0;
+ uint32_t current_bm_src = 0;
+ uint32_t fte_field_bm_dst = 1 << fte_field_dst;
+ uint32_t fte_field_bm_src = 1 << fte_field_src;
+
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_flm_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_src, &current_bm_src);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, &current_bm_dst);
+ hw_mod_cat_fte_km_get(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_src, &current_bm_src);
+ break;
+
+ default:
+ break;
+ }
+
+ uint32_t enable = current_bm_src & fte_field_bm_src;
+ uint32_t final_bm_dst = enable ? (fte_field_bm_dst | current_bm_dst)
+ : (~fte_field_bm_dst & current_bm_dst);
+
+ if (current_bm_dst != final_bm_dst) {
+ switch (type) {
+ case HW_DB_FT_TYPE_FLM:
+ hw_mod_cat_fte_flm_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_flm_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ case HW_DB_FT_TYPE_KM:
+ hw_mod_cat_fte_km_set(&ndev->be, HW_CAT_FTE_ENABLE_BM, KM_FLM_IF_FIRST,
+ fte_index_dst, final_bm_dst);
+ hw_mod_cat_fte_km_flush(&ndev->be, KM_FLM_IF_FIRST, fte_index_dst, 1);
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+
+static int hw_db_inline_filter_apply(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id,
+ struct hw_db_match_set_idx match_set_idx,
+ struct hw_db_flm_ft flm_ft_idx,
+ struct hw_db_action_set_idx action_set_idx)
+{
+ (void)match_set_idx;
+ (void)flm_ft_idx;
+
+ const struct hw_db_inline_match_set_data *match_set =
+ &db->match_set[match_set_idx.ids].data;
+ const struct hw_db_inline_cat_data *cat = &db->cat[match_set->cat.ids].data;
+
+ const int km_ft = match_set->km_ft.id1;
+ const int km_rcp = (int)db->km[match_set->km.id1].data.rcp;
+
+ const int flm_ft = flm_ft_idx.id1;
+ const int flm_rcp = flm_ft_idx.id2;
+
+ const struct hw_db_inline_action_set_data *action_set =
+ &db->action_set[action_set_idx.ids].data;
+ const struct hw_db_inline_cot_data *cot = &db->cot[action_set->cot.ids].data;
+
+ const int qsl_hw_id = action_set->qsl.ids;
+ const int slc_lr_hw_id = action_set->slc_lr.ids;
+ const int tpe_hw_id = action_set->tpe.ids;
+ const int hsh_hw_id = action_set->hsh.ids;
+
+ /* Setup default FLM RCP if needed */
+ if (flm_rcp > 0 && db->flm[flm_rcp].ref <= 0)
+ hw_db_inline_setup_default_flm_rcp(ndev, flm_rcp);
+
+ /* Setup CAT.CFN */
+ {
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_INV, cat_hw_id, 0, 0x0);
+
+ /* Protocol checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_ISL, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_CFP, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MAC, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L2, cat_hw_id, 0, cat->ptc_mask_l2);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VNTAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_VLAN, cat_hw_id, 0, cat->vlan_mask);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L3, cat_hw_id, 0, cat->ptc_mask_l3);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_FRAG, cat_hw_id, 0,
+ cat->ptc_mask_frag);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_IP_PROT, cat_hw_id, 0, cat->ip_prot);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_L4, cat_hw_id, 0, cat->ptc_mask_l4);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TUNNEL, cat_hw_id, 0,
+ cat->ptc_mask_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L2, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_VLAN, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_MPLS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L3, cat_hw_id, 0,
+ cat->ptc_mask_l3_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_FRAG, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_IP_PROT, cat_hw_id, 0,
+ cat->ip_prot_tunnel);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PTC_TNL_L4, cat_hw_id, 0,
+ cat->ptc_mask_l4_tunnel);
+
+ /* Error checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_CV, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_FCS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TRUNC, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L3_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_L4_CS, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L3_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_L4_CS, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ERR_TNL_TTL_EXP, cat_hw_id, 0,
+ cat->err_mask_ttl_tunnel);
+
+ /* MAC port check */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_MAC_PORT, cat_hw_id, 0,
+ cat->mac_port_mask);
+
+ /* Pattern match checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMP, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_DCT, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_EXT_INV, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_CMB, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_AND_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_OR_INV, cat_hw_id, 0, -1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_PM_INV, cat_hw_id, 0, -1);
+
+ /* Length checks */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_LC_INV, cat_hw_id, 0, -1);
+
+ /* KM and FLM */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM0_OR, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_KM1_OR, cat_hw_id, 0, 0x3);
+
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 0, cat_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 0, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 1, hsh_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 1, qsl_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 2, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 2,
+ slc_lr_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 3, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 4, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + 5, tpe_hw_id);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + 5, 0);
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id,
+ 0x001 | 0x004 | (qsl_hw_id ? 0x008 : 0) |
+ (slc_lr_hw_id ? 0x020 : 0) | 0x040 |
+ (tpe_hw_id ? 0x400 : 0));
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ km_rcp);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ flm_rcp);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm | (1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, km_ft, 1);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, flm_ft, 1);
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COLOR, cat_hw_id, cot->frag_rcp << 10);
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_KM, cat_hw_id,
+ cot->matcher_color_contrib);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cat_hw_id, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ return 0;
+}
+
+static void hw_db_inline_filter_clear(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ int cat_hw_id)
+{
+ /* Setup CAT.CFN */
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_SET_ALL_DEFAULTS, cat_hw_id, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cat_hw_id, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < 6; ++i) {
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cat_hw_id + i, 0);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cat_hw_id + i, 0);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cat_hw_id, 6);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cat_hw_id, 0);
+ hw_mod_cat_cte_flush(&ndev->be, cat_hw_id, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_KM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bm = 0;
+
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cat_hw_id,
+ 0);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, &bm);
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cat_hw_id / 8, bm & ~(1 << (cat_hw_id % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cat_hw_id / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_A, ft,
+ 0);
+ hw_db_set_ft(ndev, HW_DB_FT_TYPE_FLM, cat_hw_id, HW_DB_FT_LOOKUP_KEY_C, ft,
+ 0);
+ }
+ }
+
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_PRESET_ALL, cat_hw_id, 0);
+ hw_mod_cat_cot_flush(&ndev->be, cat_hw_id, 1);
+}
+
+static void hw_db_inline_filter_copy(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db, int cfn_dst, int cfn_src)
+{
+ uint32_t val = 0;
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_COPY_FROM, cfn_dst, 0, cfn_src);
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x0);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+
+ /* Setup CAT.CTS */
+ {
+ const int offset = ((int)ndev->be.cat.cts_num + 1) / 2;
+
+ for (int i = 0; i < offset; ++i) {
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_A, offset * cfn_dst + i, val);
+ hw_mod_cat_cts_get(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_src + i,
+ &val);
+ hw_mod_cat_cts_set(&ndev->be, HW_CAT_CTS_CAT_B, offset * cfn_dst + i, val);
+ }
+
+ hw_mod_cat_cts_flush(&ndev->be, offset * cfn_dst, offset);
+ }
+
+ /* Setup CAT.CTE */
+ {
+ hw_mod_cat_cte_get(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_src, &val);
+ hw_mod_cat_cte_set(&ndev->be, HW_CAT_CTE_ENABLE_BM, cfn_dst, val);
+ hw_mod_cat_cte_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ /* Setup CAT.KM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_km_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_km_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_km_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_km_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_km_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_km_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_KM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ }
+ }
+
+ /* Setup CAT.FLM */
+ {
+ uint32_t bit_src = 0;
+
+ hw_mod_cat_kcs_flm_get(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_src,
+ &val);
+ hw_mod_cat_kcs_flm_set(&ndev->be, HW_CAT_KCS_CATEGORY, KM_FLM_IF_FIRST, cfn_dst,
+ val);
+ hw_mod_cat_kcs_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst, 1);
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_src / 8, &val);
+ bit_src = (val >> (cfn_src % 8)) & 0x1;
+
+ hw_mod_cat_kce_flm_get(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, &val);
+ val &= ~(1 << (cfn_dst % 8));
+
+ hw_mod_cat_kce_flm_set(&ndev->be, HW_CAT_KCE_ENABLE_BM, KM_FLM_IF_FIRST,
+ cfn_dst / 8, val | (bit_src << (cfn_dst % 8)));
+ hw_mod_cat_kce_flm_flush(&ndev->be, KM_FLM_IF_FIRST, cfn_dst / 8, 1);
+
+ for (int ft = 0; ft < (int)db->nb_flm_ft; ++ft) {
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_A, ft);
+ hw_db_copy_ft(ndev, HW_DB_FT_TYPE_FLM, cfn_dst, cfn_src,
+ HW_DB_FT_LOOKUP_KEY_C, ft);
+ }
+ }
+
+ /* Setup CAT.COT */
+ {
+ hw_mod_cat_cot_set(&ndev->be, HW_CAT_COT_COPY_FROM, cfn_dst, cfn_src);
+ hw_mod_cat_cot_flush(&ndev->be, cfn_dst, 1);
+ }
+
+ hw_mod_cat_cfn_set(&ndev->be, HW_CAT_CFN_ENABLE, cfn_dst, 0, 0x1);
+ hw_mod_cat_cfn_flush(&ndev->be, cfn_dst, 1);
+}
+
+/*
+ * Algorithm for moving CFN entries to make space with respect of priority.
+ * The algorithm will make the fewest possible moves to fit a new CFN entry.
+ */
+static int hw_db_inline_alloc_prioritized_cfn(struct flow_nic_dev *ndev,
+ struct hw_db_inline_resource_db *db,
+ struct hw_db_match_set_idx match_set_idx)
+{
+ const struct hw_db_inline_resource_db_match_set *match_set =
+ &db->match_set[match_set_idx.ids];
+
+ uint64_t priority = ((uint64_t)(match_set->data.priority & 0xff) << 56) |
+ ((uint64_t)(0xffffff - (match_set->set_priority & 0xffffff)) << 32) |
+ (0xffffffff - ++db->cfn_priority_counter);
+
+ int db_cfn_idx = -1;
+
+ struct {
+ uint64_t priority;
+ uint32_t idx;
+ } sorted_priority[db->nb_cat];
+
+ memset(sorted_priority, 0x0, sizeof(sorted_priority));
+
+ uint32_t in_use_count = 0;
+
+ for (uint32_t i = 1; i < db->nb_cat; ++i) {
+ if (db->cfn[i].ref > 0) {
+ sorted_priority[db->cfn[i].cfn_hw].priority = db->cfn[i].priority;
+ sorted_priority[db->cfn[i].cfn_hw].idx = i;
+ in_use_count += 1;
+
+ } else if (db_cfn_idx == -1) {
+ db_cfn_idx = (int)i;
+ }
+ }
+
+ if (in_use_count >= db->nb_cat - 1)
+ return -1;
+
+ if (in_use_count == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = 1;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ int goal = 1;
+ int free_before = -1000000;
+ int free_after = 1000000;
+ int found_smaller = 0;
+
+ for (int i = 1; i < (int)db->nb_cat; ++i) {
+ if (sorted_priority[i].priority > priority) { /* Bigger */
+ goal = i + 1;
+
+ } else if (sorted_priority[i].priority == 0) { /* Not set */
+ if (found_smaller) {
+ if (free_after > i)
+ free_after = i;
+
+ } else {
+ free_before = i;
+ }
+
+ } else {/* Smaller */
+ found_smaller = 1;
+ }
+ }
+
+ int diff_before = goal - free_before - 1;
+ int diff_after = free_after - goal;
+
+ if (goal < (int)db->nb_cat && sorted_priority[goal].priority == 0) {
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+ return db_cfn_idx;
+ }
+
+ if (diff_after <= diff_before) {
+ for (int i = free_after; i > goal; --i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i - 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+
+ } else {
+ goal -= 1;
+
+ for (int i = free_before; i < goal; ++i) {
+ int *cfn_hw = &db->cfn[sorted_priority[i + 1].idx].cfn_hw;
+ hw_db_inline_filter_copy(ndev, db, i, *cfn_hw);
+ hw_db_inline_filter_clear(ndev, db, *cfn_hw);
+ *cfn_hw = i;
+ }
+ }
+
+ db->cfn[db_cfn_idx].ref = 1;
+ db->cfn[db_cfn_idx].cfn_hw = goal;
+ db->cfn[db_cfn_idx].priority = priority;
+
+ return db_cfn_idx;
+}
+
+static void hw_db_inline_free_prioritized_cfn(struct hw_db_inline_resource_db *db, int cfn_hw)
+{
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (db->cfn[i].cfn_hw == cfn_hw) {
+ memset(&db->cfn[i], 0x0, sizeof(struct hw_db_inline_resource_db_cfn));
+ break;
+ }
+ }
+}
+
+static void hw_db_inline_update_active_filters(struct flow_nic_dev *ndev, void *db_handle,
+ int group)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[group];
+ struct hw_db_inline_resource_db_flm_cfn_map *cell;
+
+ for (uint32_t match_set_idx = 0; match_set_idx < db->nb_cat; ++match_set_idx) {
+ for (uint32_t ft_idx = 0; ft_idx < db->nb_flm_ft; ++ft_idx) {
+ int active = flm_rcp->ft[ft_idx].ref > 0 &&
+ flm_rcp->match_set[match_set_idx].ref > 0;
+ cell = &flm_rcp->cfn_map[match_set_idx * db->nb_flm_ft + ft_idx];
+
+ if (active && cell->cfn_idx == 0) {
+ /* Setup filter */
+ cell->cfn_idx = hw_db_inline_alloc_prioritized_cfn(ndev, db,
+ flm_rcp->match_set[match_set_idx].idx);
+ hw_db_inline_filter_apply(ndev, db, db->cfn[cell->cfn_idx].cfn_hw,
+ flm_rcp->match_set[match_set_idx].idx,
+ flm_rcp->ft[ft_idx].idx,
+ group == 0
+ ? db->match_set[flm_rcp->match_set[match_set_idx]
+ .idx.ids]
+ .data.action_set
+ : flm_rcp->ft[ft_idx].data.action_set);
+ }
+
+ if (!active && cell->cfn_idx > 0) {
+ /* Teardown filter */
+ hw_db_inline_filter_clear(ndev, db, db->cfn[cell->cfn_idx].cfn_hw);
+ hw_db_inline_free_prioritized_cfn(db,
+ db->cfn[cell->cfn_idx].cfn_hw);
+ cell->cfn_idx = 0;
+ }
+ }
+ }
+}
+
+
+/******************************************************************************/
+/* Match set */
+/******************************************************************************/
+
+static int hw_db_inline_match_set_compare(const struct hw_db_inline_match_set_data *data1,
+ const struct hw_db_inline_match_set_data *data2)
+{
+ return data1->cat.raw == data2->cat.raw && data1->km.raw == data2->km.raw &&
+ data1->km_ft.raw == data2->km_ft.raw && data1->jump == data2->jump;
+}
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp = &db->flm[data->jump];
+ struct hw_db_match_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_MATCH_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_MATCH_SET_NB; ++i) {
+ if (!found && db->match_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->match_set[i].ref > 0 &&
+ hw_db_inline_match_set_compare(data, &db->match_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_match_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ found = 0;
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].ref <= 0) {
+ found = 1;
+ flm_rcp->match_set[i].ref = 1;
+ flm_rcp->match_set[i].idx.raw = idx.raw;
+ break;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->match_set[idx.ids].data, data, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 1;
+ db->match_set[idx.ids].set_priority = ++db->set_priority_counter;
+
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
+ return idx;
+}
+
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->match_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_inline_resource_db_flm_rcp *flm_rcp;
+ int jump;
+
+ if (idx.error)
+ return;
+
+ db->match_set[idx.ids].ref -= 1;
+
+ if (db->match_set[idx.ids].ref > 0)
+ return;
+
+ jump = db->match_set[idx.ids].data.jump;
+ flm_rcp = &db->flm[jump];
+
+ for (uint32_t i = 0; i < db->nb_cat; ++i) {
+ if (flm_rcp->match_set[i].idx.raw == idx.raw) {
+ flm_rcp->match_set[i].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, jump);
+ memset(&flm_rcp->match_set[i], 0x0,
+ sizeof(struct hw_db_inline_resource_db_flm_match_set));
+ }
+ }
+
+ memset(&db->match_set[idx.ids].data, 0x0, sizeof(struct hw_db_inline_match_set_data));
+ db->match_set[idx.ids].ref = 0;
+}
+
+/******************************************************************************/
+/* Action set */
+/******************************************************************************/
+
+static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_data *data1,
+ const struct hw_db_inline_action_set_data *data2)
+{
+ if (data1->contains_jump)
+ return data2->contains_jump && data1->jump == data2->jump;
+
+ return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
+ data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
+ data1->hsh.raw == data2->hsh.raw;
+}
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_action_set_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_ACTION_SET;
+
+ for (uint32_t i = 0; i < HW_DB_INLINE_ACTION_SET_NB; ++i) {
+ if (!found && db->action_set[i].ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+
+ if (db->action_set[i].ref > 0 &&
+ hw_db_inline_action_set_compare(data, &db->action_set[i].data)) {
+ idx.ids = i;
+ hw_db_inline_action_set_ref(ndev, db, idx);
+ return idx;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ memcpy(&db->action_set[idx.ids].data, data, sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 1;
+
+ return idx;
+}
+
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->action_set[idx.ids].ref += 1;
+}
+
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->action_set[idx.ids].ref -= 1;
+
+ if (db->action_set[idx.ids].ref <= 0) {
+ memset(&db->action_set[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_action_set_data));
+ db->action_set[idx.ids].ref = 0;
+ }
+}
+
/******************************************************************************/
/* COT */
/******************************************************************************/
@@ -1593,6 +2383,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_default(struct flow_nic_dev *ndev, void
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->jump);
+
return idx;
}
@@ -1647,6 +2439,8 @@ struct hw_db_flm_ft hw_db_inline_flm_ft_add(struct flow_nic_dev *ndev, void *db_
flm_rcp->ft[idx.id1].idx.raw = idx.raw;
flm_rcp->ft[idx.id1].ref = 1;
+ hw_db_inline_update_active_filters(ndev, db, data->group);
+
return idx;
}
@@ -1677,6 +2471,7 @@ void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle, struc
return;
flm_rcp->ft[idx.id1].ref = 0;
+ hw_db_inline_update_active_filters(ndev, db, idx.id2);
memset(&flm_rcp->ft[idx.id1], 0x0, sizeof(struct hw_db_inline_resource_db_flm_ft));
}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 9820225ffa..33de674b72 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -131,6 +131,10 @@ struct hw_db_hsh_idx {
enum hw_db_idx_type {
HW_DB_IDX_TYPE_NONE = 0,
+
+ HW_DB_IDX_TYPE_MATCH_SET,
+ HW_DB_IDX_TYPE_ACTION_SET,
+
HW_DB_IDX_TYPE_COT,
HW_DB_IDX_TYPE_CAT,
HW_DB_IDX_TYPE_QSL,
@@ -145,6 +149,17 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_HSH,
};
+/* Container types */
+struct hw_db_inline_match_set_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_km_ft km_ft;
+ struct hw_db_action_set_idx action_set;
+ int jump;
+
+ uint8_t priority;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -224,6 +239,7 @@ struct hw_db_inline_action_set_data {
struct {
struct hw_db_cot_idx cot;
struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
struct hw_db_tpe_idx tpe;
struct hw_db_hsh_idx hsh;
};
@@ -262,6 +278,25 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
/**/
+
+struct hw_db_match_set_idx
+hw_db_inline_match_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_match_set_data *data);
+void hw_db_inline_match_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+void hw_db_inline_match_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_match_set_idx idx);
+
+struct hw_db_action_set_idx
+hw_db_inline_action_set_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_action_set_data *data);
+void hw_db_inline_action_set_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+void hw_db_inline_action_set_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_action_set_idx idx);
+
+/**/
+
struct hw_db_cot_idx hw_db_inline_cot_add(struct flow_nic_dev *ndev, void *db_handle,
const struct hw_db_inline_cot_data *data);
void hw_db_inline_cot_ref(struct flow_nic_dev *ndev, void *db_handle, struct hw_db_cot_idx idx);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 7487b5150e..193959dfc5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -2677,10 +2677,30 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup Action Set */
+ struct hw_db_inline_action_set_data action_set_data = {
+ .contains_jump = 0,
+ .cot = cot_idx,
+ .qsl = qsl_idx,
+ .slc_lr = slc_lr_idx,
+ .tpe = tpe_idx,
+ .hsh = hsh_idx,
+ };
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
+ local_idxs[(*local_idx_counter)++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 0,
.group = group,
+ .action_set = action_set_idx,
};
struct hw_db_flm_ft flm_ft_idx = empty_pattern
? hw_db_inline_flm_ft_default(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data)
@@ -2867,6 +2887,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
}
}
+ struct hw_db_action_set_idx action_set_idx =
+ hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &action_set_data);
+
+ fh->db_idxs[fh->db_idx_counter++] = action_set_idx.raw;
+
+ if (action_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Action Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup CAT */
struct hw_db_inline_cat_data cat_data = {
.vlan_mask = (0xf << fd->vlans) & 0xf,
@@ -2986,6 +3018,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
struct hw_db_inline_km_ft_data km_ft_data = {
.cat = cat_idx,
.km = km_idx,
+ .action_set = action_set_idx,
};
struct hw_db_km_ft km_ft_idx =
hw_db_inline_km_ft_add(dev->ndev, dev->ndev->hw_db_handle, &km_ft_data);
@@ -3022,10 +3055,32 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
km_write_data_match_entry(&fd->km, 0);
}
+ /* Setup Match Set */
+ struct hw_db_inline_match_set_data match_set_data = {
+ .cat = cat_idx,
+ .km = km_idx,
+ .km_ft = km_ft_idx,
+ .action_set = action_set_idx,
+ .jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .priority = attr->priority & 0xff,
+ };
+ struct hw_db_match_set_idx match_set_idx =
+ hw_db_inline_match_set_add(dev->ndev, dev->ndev->hw_db_handle,
+ &match_set_data);
+ fh->db_idxs[fh->db_idx_counter++] = match_set_idx.raw;
+
+ if (match_set_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference Match Set resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
/* Setup FLM FT */
struct hw_db_inline_flm_ft_data flm_ft_data = {
.is_group_zero = 1,
.jump = fd->jump_to_group != UINT32_MAX ? fd->jump_to_group : 0,
+ .action_set = action_set_idx,
+
};
struct hw_db_flm_ft flm_ft_idx =
hw_db_inline_flm_ft_add(dev->ndev, dev->ndev->hw_db_handle, &flm_ft_data);
--
2.45.0
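The match-set and action-set stores in the patch above both follow the same pattern: linearly scan for a live identical entry and reuse it via a reference count, otherwise claim the first free slot, and fail with an error index when the table is exhausted. A minimal standalone sketch of that pattern in plain C (with a toy `entry` type and an `int` payload standing in for the driver's actual structures) could look like:

```c
#include <string.h>

#define NB_ENTRIES 8

struct entry {
	int ref;	/* 0 means the slot is free */
	int data;	/* stand-in for the real key/data payload */
};

static struct entry table[NB_ENTRIES];

/* Return the slot index holding `data`, or -1 when the table is full. */
static int entry_add(int data)
{
	int free_idx = -1;

	for (int i = 0; i < NB_ENTRIES; ++i) {
		if (free_idx < 0 && table[i].ref <= 0)
			free_idx = i;

		/* Deduplicate: reuse an identical live entry. */
		if (table[i].ref > 0 && table[i].data == data) {
			table[i].ref += 1;
			return i;
		}
	}

	if (free_idx < 0)
		return -1;	/* resource exhaustion */

	table[free_idx].data = data;
	table[free_idx].ref = 1;
	return free_idx;
}

static void entry_deref(int idx)
{
	if (--table[idx].ref <= 0)
		memset(&table[idx], 0, sizeof(table[idx]));
}
```

This mirrors why `hw_db_inline_match_set_add()` records the first free slot while it keeps scanning: the dedup hit must win over allocation, so the free index can only be used once the whole table has been searched.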
* [PATCH v5 37/80] net/ntnic: add flow dump feature
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (35 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 36/80] net/ntnic: match and action db attributes were added Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 38/80] net/ntnic: add flow flush Serhii Iliushyk
` (42 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Add the ability to dump a flow in human-readable format
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 17 ++
.../profile_inline/flow_api_hw_db_inline.c | 264 ++++++++++++++++++
.../profile_inline/flow_api_hw_db_inline.h | 3 +
.../profile_inline/flow_api_profile_inline.c | 81 ++++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 29 ++
drivers/net/ntnic/ntnic_mod_reg.h | 11 +
8 files changed, 413 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index e52363f04e..155a9e1fd6 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -281,6 +281,8 @@ struct flow_handle {
struct flow_handle *next;
struct flow_handle *prev;
+ /* Flow specific pointer to application data stored during action creation. */
+ void *context;
void *user_data;
union {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 577b1c83b5..ec91d08e27 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -972,6 +972,22 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
return 0;
}
+static int flow_dev_dump(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, FILTER, "%s: profile_inline module uninitialized", __func__);
+ return -1;
+ }
+
+ return profile_inline_ops->flow_dev_dump_profile_inline(dev, flow, caller_id, file, error);
+}
+
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf)
{
@@ -997,6 +1013,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_dev_dump = flow_dev_dump,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 52f85b65af..b5fee67e67 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -372,6 +372,270 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ char str_buffer[4096];
+ uint16_t rss_buffer_len = sizeof(str_buffer);
+
+ for (uint32_t i = 0; i < size; ++i) {
+ switch (idxs[i].type) {
+ case HW_DB_IDX_TYPE_NONE:
+ break;
+
+ case HW_DB_IDX_TYPE_MATCH_SET: {
+ const struct hw_db_inline_match_set_data *data =
+ &db->match_set[idxs[i].ids].data;
+ fprintf(file, " MATCH_SET %d, priority %d\n", idxs[i].ids,
+ (int)data->priority);
+ fprintf(file, " CAT id %d, KM id %d, KM_FT id %d, ACTION_SET id %d\n",
+ data->cat.ids, data->km.id1, data->km_ft.id1,
+ data->action_set.ids);
+
+ if (data->jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_ACTION_SET: {
+ const struct hw_db_inline_action_set_data *data =
+ &db->action_set[idxs[i].ids].data;
+ fprintf(file, " ACTION_SET %d\n", idxs[i].ids);
+
+ if (data->contains_jump)
+ fprintf(file, " Jumps to %d\n", data->jump);
+
+ else
+ fprintf(file,
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ data->cot.ids, data->qsl.ids, data->slc_lr.ids,
+ data->tpe.ids, data->hsh.ids);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_CAT: {
+ const struct hw_db_inline_cat_data *data = &db->cat[idxs[i].ids].data;
+ fprintf(file, " CAT %d\n", idxs[i].ids);
+ fprintf(file, " Port msk 0x%02x, VLAN msk 0x%02x\n",
+ (int)data->mac_port_mask, (int)data->vlan_mask);
+ fprintf(file,
+ " Proto msks: Frag 0x%02x, l2 0x%02x, l3 0x%02x, l4 0x%02x, l3t 0x%02x, l4t 0x%02x\n",
+ (int)data->ptc_mask_frag, (int)data->ptc_mask_l2,
+ (int)data->ptc_mask_l3, (int)data->ptc_mask_l4,
+ (int)data->ptc_mask_l3_tunnel, (int)data->ptc_mask_l4_tunnel);
+ fprintf(file, " IP protocol: pn %u pnt %u\n", data->ip_prot,
+ data->ip_prot_tunnel);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_QSL: {
+ const struct hw_db_inline_qsl_data *data = &db->qsl[idxs[i].ids].data;
+ fprintf(file, " QSL %d\n", idxs[i].ids);
+
+ if (data->discard) {
+ fprintf(file, " Discard\n");
+ break;
+ }
+
+ if (data->drop) {
+ fprintf(file, " Drop\n");
+ break;
+ }
+
+ fprintf(file, " Table size %d\n", data->table_size);
+
+ for (uint32_t i = 0;
+ i < data->table_size && i < HW_DB_INLINE_MAX_QST_PER_QSL; ++i) {
+ fprintf(file, " %u: Queue %d, TX port %d\n", i,
+ (data->table[i].queue_en ? (int)data->table[i].queue : -1),
+ (data->table[i].tx_port_en ? (int)data->table[i].tx_port
+ : -1));
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_COT: {
+ const struct hw_db_inline_cot_data *data = &db->cot[idxs[i].ids].data;
+ fprintf(file, " COT %d\n", idxs[i].ids);
+ fprintf(file, " Color contrib %d, frag rcp %d\n",
+ (int)data->matcher_color_contrib, (int)data->frag_rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_SLC_LR: {
+ const struct hw_db_inline_slc_lr_data *data =
+ &db->slc_lr[idxs[i].ids].data;
+ fprintf(file, " SLC_LR %d\n", idxs[i].ids);
+ fprintf(file, " Enable %u, dyn %u, ofs %u\n", data->head_slice_en,
+ data->head_slice_dyn, data->head_slice_ofs);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE: {
+ const struct hw_db_inline_tpe_data *data = &db->tpe[idxs[i].ids].data;
+ fprintf(file, " TPE %d\n", idxs[i].ids);
+ fprintf(file, " Insert len %u, new outer %u, calc eth %u\n",
+ data->insert_len, data->new_outer,
+ data->calc_eth_type_from_inner_ip);
+ fprintf(file, " TTL enable %u, dyn %u, ofs %u\n", data->ttl_en,
+ data->ttl_dyn, data->ttl_ofs);
+ fprintf(file,
+ " Len A enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_a_en, data->len_a_pos_dyn, data->len_a_pos_ofs,
+ data->len_a_add_dyn, data->len_a_add_ofs, data->len_a_sub_dyn);
+ fprintf(file,
+ " Len B enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_b_en, data->len_b_pos_dyn, data->len_b_pos_ofs,
+ data->len_b_add_dyn, data->len_b_add_ofs, data->len_b_sub_dyn);
+ fprintf(file,
+ " Len C enable %u, pos dyn %u, pos ofs %u, add dyn %u, add ofs %u, sub dyn %u\n",
+ data->len_c_en, data->len_c_pos_dyn, data->len_c_pos_ofs,
+ data->len_c_add_dyn, data->len_c_add_ofs, data->len_c_sub_dyn);
+
+ for (uint32_t i = 0; i < 6; ++i)
+ if (data->writer[i].en)
+ fprintf(file,
+ " Writer %i: Reader %u, dyn %u, ofs %u, len %u\n",
+ i, data->writer[i].reader_select,
+ data->writer[i].dyn, data->writer[i].ofs,
+ data->writer[i].len);
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_TPE_EXT: {
+ const struct hw_db_inline_tpe_ext_data *data =
+ &db->tpe_ext[idxs[i].ids].data;
+ const int rpl_rpl_length = ((int)data->size + 15) / 16;
+ fprintf(file, " TPE_EXT %d\n", idxs[i].ids);
+ fprintf(file, " Encap data, size %u\n", data->size);
+
+ for (int i = 0; i < rpl_rpl_length; ++i) {
+ fprintf(file, " ");
+
+ for (int n = 15; n >= 0; --n)
+ fprintf(file, " %02x%s", data->hdr8[i * 16 + n],
+ n == 8 ? " " : "");
+
+ fprintf(file, "\n");
+ }
+
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_RCP: {
+ const struct hw_db_inline_flm_rcp_data *data = &db->flm[idxs[i].id1].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " QW0 dyn %u, ofs %u, QW4 dyn %u, ofs %u\n",
+ data->qw0_dyn, data->qw0_ofs, data->qw4_dyn, data->qw4_ofs);
+ fprintf(file, " SW8 dyn %u, ofs %u, SW9 dyn %u, ofs %u\n",
+ data->sw8_dyn, data->sw8_ofs, data->sw9_dyn, data->sw9_ofs);
+ fprintf(file, " Outer prot %u, inner prot %u\n", data->outer_prot,
+ data->inner_prot);
+ fprintf(file, " Mask:\n");
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[0],
+ data->mask[1], data->mask[2], data->mask[3], data->mask[4]);
+ fprintf(file, " %08x %08x %08x %08x %08x\n", data->mask[5],
+ data->mask[6], data->mask[7], data->mask[8], data->mask[9]);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_FLM_FT: {
+ const struct hw_db_inline_flm_ft_data *data =
+ &db->flm[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " FLM_FT %d\n", idxs[i].id1);
+
+ if (data->is_group_zero)
+ fprintf(file, " Jump to %d\n", data->jump);
+
+ else
+ fprintf(file, " Group %d\n", data->group);
+
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_RCP: {
+ const struct hw_db_inline_km_rcp_data *data = &db->km[idxs[i].id1].data;
+ fprintf(file, " KM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " HW id %u\n", data->rcp);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_KM_FT: {
+ const struct hw_db_inline_km_ft_data *data =
+ &db->km[idxs[i].id2].ft[idxs[i].id1].data;
+ fprintf(file, " KM_FT %d\n", idxs[i].id1);
+ fprintf(file, " ACTION_SET id %d\n", data->action_set.ids);
+ fprintf(file, " KM_RCP id %d\n", data->km.ids);
+ fprintf(file, " CAT id %d\n", data->cat.ids);
+ break;
+ }
+
+ case HW_DB_IDX_TYPE_HSH: {
+ const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
+ fprintf(file, " HSH %d\n", idxs[i].ids);
+
+ switch (data->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ fprintf(file, " Func: NTH10\n");
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ fprintf(file, " Func: Toeplitz\n");
+ fprintf(file, " Key:");
+
+ for (uint8_t i = 0; i < MAX_RSS_KEY_LEN; i++) {
+ if (i % 10 == 0)
+ fprintf(file, "\n ");
+
+ fprintf(file, " %02x", data->key[i]);
+ }
+
+ fprintf(file, "\n");
+ break;
+
+ default:
+ fprintf(file, " Func: %u\n", data->func);
+ }
+
+ fprintf(file, " Hash mask hex:\n");
+ fprintf(file, " %016" PRIx64 "\n", data->hash_mask);
+
+ /* convert hash mask to human readable RTE_ETH_RSS_* form if possible */
+ if (sprint_nt_rss_mask(str_buffer, rss_buffer_len, "\n ",
+ data->hash_mask) == 0) {
+ fprintf(file, " Hash mask flags:%s\n", str_buffer);
+ }
+
+ break;
+ }
+
+ default: {
+ fprintf(file, " Unknown item. Type %u\n", idxs[i].type);
+ break;
+ }
+ }
+ }
+}
+
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file)
+{
+ (void)ndev;
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ fprintf(file, "CFN status:\n");
+
+ for (uint32_t id = 0; id < db->nb_cat; ++id)
+ if (db->cfn[id].cfn_hw)
+ fprintf(file, " ID %d, HW id %d, priority 0x%" PRIx64 "\n", (int)id,
+ db->cfn[id].cfn_hw, db->cfn[id].priority);
+}
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index 33de674b72..a9d31c86ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -276,6 +276,9 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
uint32_t size);
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
+void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
+ uint32_t size, FILE *file);
+void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file);
/**/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 193959dfc5..2d3df62cda 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4296,6 +4296,86 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev, int hsh_idx,
return res;
}
+static void dump_flm_data(const uint32_t *data, FILE *file)
+{
+ for (unsigned int i = 0; i < 10; ++i) {
+ fprintf(file, "%s%02X %02X %02X %02X%s", i % 2 ? "" : " ",
+ (data[i] >> 24) & 0xff, (data[i] >> 16) & 0xff, (data[i] >> 8) & 0xff,
+ data[i] & 0xff, i % 2 ? "\n" : " ");
+ }
+}
+
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ if (flow != NULL) {
+ if (flow->type == FLOW_HANDLE_TYPE_FLM) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+
+ } else {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n", (int)dev->port_id,
+ (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs, flow->db_idx_counter,
+ file);
+ }
+
+ } else {
+ int max_flm_count = 1000;
+
+ hw_db_inline_dump_cfn(dev->ndev, dev->ndev->hw_db_handle, file);
+
+ flow = dev->ndev->flow_base;
+
+ while (flow) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLOW\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->db_idxs,
+ flow->db_idx_counter, file);
+ }
+
+ flow = flow->next;
+ }
+
+ flow = dev->ndev->flow_base_flm;
+
+ while (flow && max_flm_count >= 0) {
+ if (flow->caller_id == caller_id) {
+ fprintf(file, "Port %d, caller %d, flow type FLM\n",
+ (int)dev->port_id, (int)flow->caller_id);
+ fprintf(file, " FLM_DATA:\n");
+ dump_flm_data(flow->flm_data, file);
+ hw_db_inline_dump(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter, file);
+ fprintf(file, " Context: %p\n", flow->context);
+ max_flm_count -= 1;
+ }
+
+ flow = flow->next;
+ }
+ }
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
static const struct profile_inline_ops ops = {
/*
@@ -4304,6 +4384,7 @@ static const struct profile_inline_ops ops = {
.done_flow_management_of_ndev_profile_inline = done_flow_management_of_ndev_profile_inline,
.initialize_flow_management_of_ndev_profile_inline =
initialize_flow_management_of_ndev_profile_inline,
+ .flow_dev_dump_profile_inline = flow_dev_dump_profile_inline,
/*
* Flow functionality
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e623bb2352..2c76a2c023 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,12 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 20b5cb2835..67a24a00f1 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -582,9 +582,38 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow,
+ FILE *file,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG(ERR, NTNIC, "%s: flow_filter module uninitialized", __func__);
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_dev_dump(internals->flw_dev,
+ is_flow_handle_typecast(flow) ? (void *)flow
+ : flow->flw_hdl,
+ caller_id, file, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .dev_dump = eth_flow_dev_dump,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 27d6cbef01..cef655c5e0 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,12 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -284,6 +290,11 @@ struct flow_filter_ops {
int *rss_target_id,
enum flow_eth_dev_profile flow_profile,
uint32_t exception_path);
+ int (*flow_dev_dump)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ uint16_t caller_id,
+ FILE *file,
+ struct rte_flow_error *error);
/*
* NT Flow API
*/
--
2.45.0
* [PATCH v5 38/80] net/ntnic: add flow flush
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (36 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 37/80] net/ntnic: add flow dump feature Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 39/80] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
` (41 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Implement flow flush support
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 13 ++++++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 4 ++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 38 ++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +++
5 files changed, 105 insertions(+)
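The flush implementation in this patch walks two singly linked flow lists and must save each node's `next` pointer before destroying the node it points at. The core safe-deletion traversal can be sketched in plain C (a toy `node` type with an `owner` field stands in for `struct flow_handle` and its `dev`/`caller_id` match; unlike the driver, this sketch unlinks and frees the node itself):

```c
#include <stdlib.h>

struct node {
	int owner;		/* stands in for dev/caller_id matching */
	struct node *next;
};

/* Destroy every node owned by `owner`; return the new list head. */
static struct node *flush_owned(struct node *head, int owner)
{
	struct node **link = &head;

	while (*link) {
		struct node *cur = *link;

		if (cur->owner == owner) {
			/* Advance past the node before freeing it, so the
			 * saved next pointer is never read from freed memory. */
			*link = cur->next;
			free(cur);
		} else {
			link = &cur->next;
		}
	}

	return head;
}
```

The patch applies the same idea twice, FLM flows first and then their parent flows, because destroying a parent before its FLM children would leave dangling references.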
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index ec91d08e27..fc9c68ed1a 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -251,6 +251,18 @@ static int flow_destroy(struct flow_eth_dev *dev __rte_unused,
return profile_inline_ops->flow_destroy_profile_inline(dev, flow, error);
}
+static int flow_flush(struct flow_eth_dev *dev, uint16_t caller_id, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_flush_profile_inline(dev, caller_id, error);
+}
+
/*
* Device Management API
*/
@@ -1013,6 +1025,7 @@ static const struct flow_filter_ops ops = {
*/
.flow_create = flow_create,
.flow_destroy = flow_destroy,
+ .flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
};
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 2d3df62cda..0232954bec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3631,6 +3631,48 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
return err;
}
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error)
+{
+ int err = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ /*
+ * Delete all created FLM flows from this eth device.
+ * FLM flows must be deleted first because normal flows are their parents.
+ */
+ struct flow_handle *flow = dev->ndev->flow_base_flm;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ /* Delete all created flows from this eth device */
+ flow = dev->ndev->flow_base;
+
+ while (flow && !err) {
+ if (flow->dev == dev && flow->caller_id == caller_id) {
+ struct flow_handle *flow_next = flow->next;
+ err = flow_destroy_profile_inline(dev, flow, error);
+ flow = flow_next;
+
+ } else {
+ flow = flow->next;
+ }
+ }
+
+ return err;
+}
+
static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
{
return (hash_mask & hash_bits) == hash_bits;
@@ -4391,6 +4433,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_locked_profile_inline = flow_destroy_locked_profile_inline,
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
+ .flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
/*
* NT Flow FLM Meter API
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 2c76a2c023..c695842077 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -38,6 +38,10 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+int flow_flush_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 67a24a00f1..93d89d59f3 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -582,6 +582,43 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return flow;
}
+static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
+ int res = 0;
+ /* Main application caller_id is port_id shifted above VDPA ports */
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (internals->flw_dev) {
+ res = flow_filter_ops->flow_flush(internals->flw_dev, caller_id, &flow_error);
+ rte_spinlock_lock(&flow_lock);
+
+ for (int flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used && nt_flows[flow].caller_id == caller_id) {
+ /* Cleanup recorded flows */
+ nt_flows[flow].used = 0;
+ nt_flows[flow].caller_id = 0;
+ }
+ }
+
+ rte_spinlock_unlock(&flow_lock);
+ }
+
+ convert_error(error, &flow_error);
+
+ return res;
+}
+
static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
struct rte_flow *flow,
FILE *file,
@@ -613,6 +650,7 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
+ .flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index cef655c5e0..12baa13800 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -253,6 +253,10 @@ struct profile_inline_ops {
struct flow_handle *flow,
struct rte_flow_error *error);
+ int (*flow_flush_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -309,6 +313,9 @@ struct flow_filter_ops {
int (*flow_destroy)(struct flow_eth_dev *dev,
struct flow_handle *flow,
struct rte_flow_error *error);
+
+ int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 39/80] net/ntnic: add GMF (Generic MAC Feeder) module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (37 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 38/80] net/ntnic: add flow flush Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 40/80] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
` (40 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
The Generic MAC Feeder module provides a way to feed data
to the MAC modules directly from the FPGA,
rather than from host or physical ports.
Its use case is as a test tool, and it is not used by NTNIC itself.
The module is nevertheless required for correct port initialization.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
.../link_mgmt/link_100g/nt4ga_link_100g.c | 8 ++
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_core.h | 1 +
.../net/ntnic/nthw/core/include/nthw_gmf.h | 64 +++++++++
drivers/net/ntnic/nthw/core/nthw_gmf.c | 133 ++++++++++++++++++
5 files changed, 207 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_gmf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_gmf.c
diff --git a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
index 8964458b47..d8e0cad7cd 100644
--- a/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
+++ b/drivers/net/ntnic/link_mgmt/link_100g/nt4ga_link_100g.c
@@ -404,6 +404,14 @@ static int _port_init(adapter_info_t *drv, nthw_fpga_t *fpga, int port)
_enable_tx(drv, mac_pcs);
_reset_rx(drv, mac_pcs);
+ /* 2.2) Nt4gaPort::setup() */
+ if (nthw_gmf_init(NULL, fpga, port) == 0) {
+ nthw_gmf_t gmf;
+
+ if (nthw_gmf_init(&gmf, fpga, port) == 0)
+ nthw_gmf_set_enable(&gmf, true);
+ }
+
/* Phase 3. Link state machine steps */
/* 3.1) Create NIM, ::createNim() */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index d7e6d05556..92167d24e4 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -38,6 +38,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst9563.c',
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
+ 'nthw/core/nthw_gmf.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_core.h b/drivers/net/ntnic/nthw/core/include/nthw_core.h
index fe32891712..4073f9632c 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_core.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_core.h
@@ -17,6 +17,7 @@
#include "nthw_iic.h"
#include "nthw_i2cm.h"
+#include "nthw_gmf.h"
#include "nthw_gpio_phy.h"
#include "nthw_mac_pcs.h"
#include "nthw_sdc.h"
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_gmf.h b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
new file mode 100644
index 0000000000..cc5be85154
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_gmf.h
@@ -0,0 +1,64 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_GMF_H__
+#define __NTHW_GMF_H__
+
+struct nthw_gmf {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_gmf;
+ int mn_instance;
+
+ nthw_register_t *mp_ctrl;
+ nthw_field_t *mp_ctrl_enable;
+ nthw_field_t *mp_ctrl_ifg_enable;
+ nthw_field_t *mp_ctrl_ifg_tx_now_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_always;
+ nthw_field_t *mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock;
+ nthw_field_t *mp_ctrl_ifg_auto_adjust_enable;
+ nthw_field_t *mp_ctrl_ts_inject_always;
+ nthw_field_t *mp_ctrl_fcs_always;
+
+ nthw_register_t *mp_speed;
+ nthw_field_t *mp_speed_ifg_speed;
+
+ nthw_register_t *mp_ifg_clock_delta;
+ nthw_field_t *mp_ifg_clock_delta_delta;
+
+ nthw_register_t *mp_ifg_clock_delta_adjust;
+ nthw_field_t *mp_ifg_clock_delta_adjust_delta;
+
+ nthw_register_t *mp_ifg_max_adjust_slack;
+ nthw_field_t *mp_ifg_max_adjust_slack_slack;
+
+ nthw_register_t *mp_debug_lane_marker;
+ nthw_field_t *mp_debug_lane_marker_compensation;
+
+ nthw_register_t *mp_stat_sticky;
+ nthw_field_t *mp_stat_sticky_data_underflowed;
+ nthw_field_t *mp_stat_sticky_ifg_adjusted;
+
+ nthw_register_t *mp_stat_next_pkt;
+ nthw_field_t *mp_stat_next_pkt_ns;
+
+ nthw_register_t *mp_stat_max_delayed_pkt;
+ nthw_field_t *mp_stat_max_delayed_pkt_ns;
+
+ nthw_register_t *mp_ts_inject;
+ nthw_field_t *mp_ts_inject_offset;
+ nthw_field_t *mp_ts_inject_pos;
+ int mn_param_gmf_ifg_speed_mul;
+ int mn_param_gmf_ifg_speed_div;
+
+ bool m_administrative_block; /* Used to enforce license expiry */
+};
+
+typedef struct nthw_gmf nthw_gmf_t;
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable);
+
+#endif /* __NTHW_GMF_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_gmf.c b/drivers/net/ntnic/nthw/core/nthw_gmf.c
new file mode 100644
index 0000000000..16a4c288bd
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_gmf.c
@@ -0,0 +1,133 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <limits.h>
+#include <math.h>
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_gmf.h"
+
+int nthw_gmf_init(nthw_gmf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_GMF, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: GMF %d: no such instance",
+ p_fpga->p_fpga_info->mp_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_gmf = mod;
+
+ p->mp_ctrl = nthw_module_get_register(p->mp_mod_gmf, GMF_CTRL);
+ p->mp_ctrl_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_ENABLE);
+ p->mp_ctrl_ifg_enable = nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_ENABLE);
+ p->mp_ctrl_ifg_auto_adjust_enable =
+ nthw_register_get_field(p->mp_ctrl, GMF_CTRL_IFG_AUTO_ADJUST_ENABLE);
+ p->mp_ctrl_ts_inject_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_TS_INJECT_ALWAYS);
+ p->mp_ctrl_fcs_always = nthw_register_query_field(p->mp_ctrl, GMF_CTRL_FCS_ALWAYS);
+
+ p->mp_speed = nthw_module_get_register(p->mp_mod_gmf, GMF_SPEED);
+ p->mp_speed_ifg_speed = nthw_register_get_field(p->mp_speed, GMF_SPEED_IFG_SPEED);
+
+ p->mp_ifg_clock_delta = nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA);
+ p->mp_ifg_clock_delta_delta =
+ nthw_register_get_field(p->mp_ifg_clock_delta, GMF_IFG_SET_CLOCK_DELTA_DELTA);
+
+ p->mp_ifg_max_adjust_slack =
+ nthw_module_get_register(p->mp_mod_gmf, GMF_IFG_MAX_ADJUST_SLACK);
+ p->mp_ifg_max_adjust_slack_slack = nthw_register_get_field(p->mp_ifg_max_adjust_slack,
+ GMF_IFG_MAX_ADJUST_SLACK_SLACK);
+
+ p->mp_debug_lane_marker = nthw_module_get_register(p->mp_mod_gmf, GMF_DEBUG_LANE_MARKER);
+ p->mp_debug_lane_marker_compensation =
+ nthw_register_get_field(p->mp_debug_lane_marker,
+ GMF_DEBUG_LANE_MARKER_COMPENSATION);
+
+ p->mp_stat_sticky = nthw_module_get_register(p->mp_mod_gmf, GMF_STAT_STICKY);
+ p->mp_stat_sticky_data_underflowed =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_DATA_UNDERFLOWED);
+ p->mp_stat_sticky_ifg_adjusted =
+ nthw_register_get_field(p->mp_stat_sticky, GMF_STAT_STICKY_IFG_ADJUSTED);
+
+ p->mn_param_gmf_ifg_speed_mul =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_MUL, 1);
+ p->mn_param_gmf_ifg_speed_div =
+ nthw_fpga_get_product_param(p_fpga, NT_GMF_IFG_SPEED_DIV, 1);
+
+ p->m_administrative_block = false;
+
+ p->mp_stat_next_pkt = nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_NEXT_PKT);
+
+ if (p->mp_stat_next_pkt) {
+ p->mp_stat_next_pkt_ns =
+ nthw_register_query_field(p->mp_stat_next_pkt, GMF_STAT_NEXT_PKT_NS);
+
+ } else {
+ p->mp_stat_next_pkt_ns = NULL;
+ }
+
+ p->mp_stat_max_delayed_pkt =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_STAT_MAX_DELAYED_PKT);
+
+ if (p->mp_stat_max_delayed_pkt) {
+ p->mp_stat_max_delayed_pkt_ns =
+ nthw_register_query_field(p->mp_stat_max_delayed_pkt,
+ GMF_STAT_MAX_DELAYED_PKT_NS);
+
+ } else {
+ p->mp_stat_max_delayed_pkt_ns = NULL;
+ }
+
+ p->mp_ctrl_ifg_tx_now_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_NOW_ALWAYS);
+ p->mp_ctrl_ifg_tx_on_ts_always =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ALWAYS);
+
+ p->mp_ctrl_ifg_tx_on_ts_adjust_on_set_clock =
+ nthw_register_query_field(p->mp_ctrl, GMF_CTRL_IFG_TX_ON_TS_ADJUST_ON_SET_CLOCK);
+
+ p->mp_ifg_clock_delta_adjust =
+ nthw_module_query_register(p->mp_mod_gmf, GMF_IFG_SET_CLOCK_DELTA_ADJUST);
+
+ if (p->mp_ifg_clock_delta_adjust) {
+ p->mp_ifg_clock_delta_adjust_delta =
+ nthw_register_query_field(p->mp_ifg_clock_delta_adjust,
+ GMF_IFG_SET_CLOCK_DELTA_ADJUST_DELTA);
+
+ } else {
+ p->mp_ifg_clock_delta_adjust_delta = NULL;
+ }
+
+ p->mp_ts_inject = nthw_module_query_register(p->mp_mod_gmf, GMF_TS_INJECT);
+
+ if (p->mp_ts_inject) {
+ p->mp_ts_inject_offset =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_OFFSET);
+ p->mp_ts_inject_pos =
+ nthw_register_query_field(p->mp_ts_inject, GMF_TS_INJECT_POS);
+
+ } else {
+ p->mp_ts_inject_offset = NULL;
+ p->mp_ts_inject_pos = NULL;
+ }
+
+ return 0;
+}
+
+void nthw_gmf_set_enable(nthw_gmf_t *p, bool enable)
+{
+ if (!p->m_administrative_block)
+ nthw_field_set_val_flush32(p->mp_ctrl_enable, enable ? 1 : 0);
+}
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 40/80] net/ntnic: sort FPGA registers alphanumerically
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (38 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 39/80] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 41/80] net/ntnic: add CSU module registers Serhii Iliushyk
` (39 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Cosmetic commit: sorting the register definitions alphanumerically
makes it easier to support different FPGA variants.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 364 +++++++++---------
1 file changed, 182 insertions(+), 182 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 6df7208649..e076697a92 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,187 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
+ { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
+ { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
+ { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
+ { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
+ { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
+ { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
+ { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
+ { DBS_RX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
+ { DBS_RX_INIT_BUSY, 1, 8, 0 },
+ { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
+ { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
+ { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
+ { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
+ { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
+ { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
+ { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
+ { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
+ { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
+ { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
+ { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
+ { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
+ { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
+ { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
+ { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
+ { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
+ { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
+ { DBS_TX_IDLE_BUSY, 1, 8, 0 },
+ { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
+ { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
+ { DBS_TX_INIT_BUSY, 1, 8, 0 },
+ { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
+ { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
+ { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
+ { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
+ { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
+ { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
+ { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
+ { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
+ { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
+ { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
+ { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
+ { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
+ { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
+ { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
+ { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
+ { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
+ { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
+ { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
+ { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
+ { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
+ { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
+ { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
+};
+
+static nthw_fpga_register_init_s dbs_registers[] = {
+ { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
+ { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
+ { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
+ { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
+ { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
+ { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
+ { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
+ { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
+ { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
+ { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
+ { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
+ { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
+ { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
+ { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
+ { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
+ { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
+ { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
+ { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
+ { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
+ { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
+ { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
+ { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
+ { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
+ { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
+ { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
+ { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
+ { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1541,192 +1722,11 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
-static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
- { DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_am_data_fields[] = {
- { DBS_RX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_RX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_RX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_RX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_control_fields[] = {
- { DBS_RX_CONTROL_AME, 1, 7, 0 }, { DBS_RX_CONTROL_AMS, 4, 8, 8 },
- { DBS_RX_CONTROL_LQ, 7, 0, 0 }, { DBS_RX_CONTROL_QE, 1, 17, 0 },
- { DBS_RX_CONTROL_UWE, 1, 12, 0 }, { DBS_RX_CONTROL_UWS, 4, 13, 5 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_ctrl_fields[] = {
- { DBS_RX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_dr_data_fields[] = {
- { DBS_RX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_RX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_RX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_RX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_idle_fields[] = {
- { DBS_RX_IDLE_BUSY, 1, 8, 0 },
- { DBS_RX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_RX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_fields[] = {
- { DBS_RX_INIT_BUSY, 1, 8, 0 },
- { DBS_RX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_RX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_init_val_fields[] = {
- { DBS_RX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_RX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_ptr_fields[] = {
- { DBS_RX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_RX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_RX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_ctrl_fields[] = {
- { DBS_RX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_RX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_rx_uw_data_fields[] = {
- { DBS_RX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_RX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_RX_UW_DATA_INT, 1, 88, 0x0000 }, { DBS_RX_UW_DATA_ISTK, 1, 92, 0x0000 },
- { DBS_RX_UW_DATA_PCKED, 1, 87, 0x0000 }, { DBS_RX_UW_DATA_QS, 15, 72, 0x0000 },
- { DBS_RX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_ctrl_fields[] = {
- { DBS_TX_AM_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_AM_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_am_data_fields[] = {
- { DBS_TX_AM_DATA_ENABLE, 1, 72, 0x0000 }, { DBS_TX_AM_DATA_GPA, 64, 0, 0x0000 },
- { DBS_TX_AM_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_AM_DATA_INT, 1, 74, 0x0000 },
- { DBS_TX_AM_DATA_PCKED, 1, 73, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_control_fields[] = {
- { DBS_TX_CONTROL_AME, 1, 7, 0 }, { DBS_TX_CONTROL_AMS, 4, 8, 5 },
- { DBS_TX_CONTROL_LQ, 7, 0, 0 }, { DBS_TX_CONTROL_QE, 1, 17, 0 },
- { DBS_TX_CONTROL_UWE, 1, 12, 0 }, { DBS_TX_CONTROL_UWS, 4, 13, 8 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_ctrl_fields[] = {
- { DBS_TX_DR_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_DR_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_dr_data_fields[] = {
- { DBS_TX_DR_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_DR_DATA_HDR, 1, 88, 0x0000 },
- { DBS_TX_DR_DATA_HID, 8, 64, 0x0000 }, { DBS_TX_DR_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_DR_DATA_PORT, 1, 89, 0x0000 }, { DBS_TX_DR_DATA_QS, 15, 72, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_idle_fields[] = {
- { DBS_TX_IDLE_BUSY, 1, 8, 0 },
- { DBS_TX_IDLE_IDLE, 1, 0, 0x0000 },
- { DBS_TX_IDLE_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_fields[] = {
- { DBS_TX_INIT_BUSY, 1, 8, 0 },
- { DBS_TX_INIT_INIT, 1, 0, 0x0000 },
- { DBS_TX_INIT_QUEUE, 7, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_init_val_fields[] = {
- { DBS_TX_INIT_VAL_IDX, 16, 0, 0x0000 },
- { DBS_TX_INIT_VAL_PTR, 15, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_ptr_fields[] = {
- { DBS_TX_PTR_PTR, 16, 0, 0x0000 },
- { DBS_TX_PTR_QUEUE, 7, 16, 0x0000 },
- { DBS_TX_PTR_VALID, 1, 23, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_ctrl_fields[] = {
- { DBS_TX_QOS_CTRL_ADR, 1, 0, 0x0000 },
- { DBS_TX_QOS_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_data_fields[] = {
- { DBS_TX_QOS_DATA_BS, 27, 17, 0x0000 },
- { DBS_TX_QOS_DATA_EN, 1, 0, 0x0000 },
- { DBS_TX_QOS_DATA_IR, 16, 1, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qos_rate_fields[] = {
- { DBS_TX_QOS_RATE_DIV, 19, 16, 2 },
- { DBS_TX_QOS_RATE_MUL, 16, 0, 1 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_ctrl_fields[] = {
- { DBS_TX_QP_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_QP_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_qp_data_fields[] = {
- { DBS_TX_QP_DATA_VPORT, 1, 0, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_ctrl_fields[] = {
- { DBS_TX_UW_CTRL_ADR, 7, 0, 0x0000 },
- { DBS_TX_UW_CTRL_CNT, 16, 16, 0x0000 },
-};
-
-static nthw_fpga_field_init_s dbs_tx_uw_data_fields[] = {
- { DBS_TX_UW_DATA_GPA, 64, 0, 0x0000 }, { DBS_TX_UW_DATA_HID, 8, 64, 0x0000 },
- { DBS_TX_UW_DATA_INO, 1, 93, 0x0000 }, { DBS_TX_UW_DATA_INT, 1, 88, 0x0000 },
- { DBS_TX_UW_DATA_ISTK, 1, 92, 0x0000 }, { DBS_TX_UW_DATA_PCKED, 1, 87, 0x0000 },
- { DBS_TX_UW_DATA_QS, 15, 72, 0x0000 }, { DBS_TX_UW_DATA_VEC, 3, 89, 0x0000 },
-};
-
-static nthw_fpga_register_init_s dbs_registers[] = {
- { DBS_RX_AM_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_am_ctrl_fields },
- { DBS_RX_AM_DATA, 11, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_am_data_fields },
- { DBS_RX_CONTROL, 0, 18, NTHW_FPGA_REG_TYPE_RW, 43008, 6, dbs_rx_control_fields },
- { DBS_RX_DR_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_dr_ctrl_fields },
- { DBS_RX_DR_DATA, 19, 89, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_rx_dr_data_fields },
- { DBS_RX_IDLE, 8, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_idle_fields },
- { DBS_RX_INIT, 2, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_init_fields },
- { DBS_RX_INIT_VAL, 3, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_init_val_fields },
- { DBS_RX_PTR, 4, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_rx_ptr_fields },
- { DBS_RX_UW_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_rx_uw_ctrl_fields },
- { DBS_RX_UW_DATA, 15, 93, NTHW_FPGA_REG_TYPE_WO, 0, 7, dbs_rx_uw_data_fields },
- { DBS_TX_AM_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_am_ctrl_fields },
- { DBS_TX_AM_DATA, 13, 75, NTHW_FPGA_REG_TYPE_WO, 0, 5, dbs_tx_am_data_fields },
- { DBS_TX_CONTROL, 1, 18, NTHW_FPGA_REG_TYPE_RW, 66816, 6, dbs_tx_control_fields },
- { DBS_TX_DR_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_dr_ctrl_fields },
- { DBS_TX_DR_DATA, 21, 90, NTHW_FPGA_REG_TYPE_WO, 0, 6, dbs_tx_dr_data_fields },
- { DBS_TX_IDLE, 9, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_idle_fields },
- { DBS_TX_INIT, 5, 9, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_init_fields },
- { DBS_TX_INIT_VAL, 6, 31, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_init_val_fields },
- { DBS_TX_PTR, 7, 24, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, dbs_tx_ptr_fields },
- { DBS_TX_QOS_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qos_ctrl_fields },
- { DBS_TX_QOS_DATA, 25, 44, NTHW_FPGA_REG_TYPE_WO, 0, 3, dbs_tx_qos_data_fields },
- { DBS_TX_QOS_RATE, 26, 35, NTHW_FPGA_REG_TYPE_RW, 131073, 2, dbs_tx_qos_rate_fields },
- { DBS_TX_QP_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_qp_ctrl_fields },
- { DBS_TX_QP_DATA, 23, 1, NTHW_FPGA_REG_TYPE_WO, 0, 1, dbs_tx_qp_data_fields },
- { DBS_TX_UW_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, dbs_tx_uw_ctrl_fields },
- { DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
-};
-
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
- { MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers},
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
{
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 41/80] net/ntnic: add CSU module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (39 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 40/80] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 42/80] net/ntnic: add FLM " Serhii Iliushyk
` (38 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Checksum Update module updates the checksums of packets
that have been modified in any way.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index e076697a92..efa7b306bc 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,23 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
+ { CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s csu_rcp_data_fields[] = {
+ { CSU_RCP_DATA_IL3_CMD, 2, 5, 0x0000 },
+ { CSU_RCP_DATA_IL4_CMD, 3, 7, 0x0000 },
+ { CSU_RCP_DATA_OL3_CMD, 2, 0, 0x0000 },
+ { CSU_RCP_DATA_OL4_CMD, 3, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s csu_registers[] = {
+ { CSU_RCP_CTRL, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, csu_rcp_ctrl_fields },
+ { CSU_RCP_DATA, 2, 10, NTHW_FPGA_REG_TYPE_WO, 0, 4, csu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s dbs_rx_am_ctrl_fields[] = {
{ DBS_RX_AM_CTRL_ADR, 7, 0, 0x0000 },
{ DBS_RX_AM_CTRL_CNT, 16, 16, 0x0000 },
@@ -1724,6 +1741,7 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
+ { MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
@@ -1919,5 +1937,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 22, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
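The field tables added in this patch follow a `{ id, bit_width, bit_offset, reset }` layout; for example `{ CSU_RCP_CTRL_CNT, 16, 16, 0x0000 }` describes a 16-bit count field starting at bit 16 with a reset value of 0. A minimal sketch of how such a descriptor translates into mask/shift register access (the struct and helper names below are illustrative assumptions, not the ntnic driver's actual API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the 4-column field tables above:
 * { field_id, bit_width, bit_offset, reset_value }. */
struct field_init {
	uint32_t id;     /* e.g. CSU_RCP_CTRL_CNT */
	unsigned width;  /* field width in bits */
	unsigned offset; /* LSB position within the register */
	uint32_t reset;  /* value after reset */
};

/* Extract a field from a 32-bit register value using its descriptor. */
static uint32_t field_get(const struct field_init *f, uint32_t reg)
{
	uint32_t mask = (f->width >= 32) ? 0xffffffffu : ((1u << f->width) - 1u);
	return (reg >> f->offset) & mask;
}

/* Write a field into a register value, leaving the other bits untouched. */
static uint32_t field_set(const struct field_init *f, uint32_t reg, uint32_t val)
{
	uint32_t mask = (f->width >= 32) ? 0xffffffffu : ((1u << f->width) - 1u);
	return (reg & ~(mask << f->offset)) | ((val & mask) << f->offset);
}
```

Fields wider than 32 bits (such as the 320-bit FLM mask later in the series) span several consecutive 32-bit words, which this single-word sketch does not cover.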
* [PATCH v5 42/80] net/ntnic: add FLM module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (40 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 41/80] net/ntnic: add CSU module registers Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 43/80] net/ntnic: add HFU " Serhii Iliushyk
` (37 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Flow Matcher module is a high-performance stateful SDRAM lookup and
programming engine that supports exact-match lookup at line rate
for up to hundreds of millions of flows.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 286 +++++++++++++++++-
1 file changed, 284 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efa7b306bc..739cabfb1c 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -468,6 +468,288 @@ static nthw_fpga_register_init_s dbs_registers[] = {
{ DBS_TX_UW_DATA, 17, 94, NTHW_FPGA_REG_TYPE_WO, 0, 8, dbs_tx_uw_data_fields },
};
+static nthw_fpga_field_init_s flm_buf_ctrl_fields[] = {
+ { FLM_BUF_CTRL_INF_AVAIL, 16, 16, 0x0000 },
+ { FLM_BUF_CTRL_LRN_FREE, 16, 0, 0x0000 },
+ { FLM_BUF_CTRL_STA_AVAIL, 16, 32, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_control_fields[] = {
+ { FLM_CONTROL_CALIB_RECALIBRATE, 3, 28, 0 },
+ { FLM_CONTROL_CRCRD, 1, 12, 0x0000 },
+ { FLM_CONTROL_CRCWR, 1, 11, 0x0000 },
+ { FLM_CONTROL_EAB, 5, 18, 0 },
+ { FLM_CONTROL_ENABLE, 1, 0, 0 },
+ { FLM_CONTROL_INIT, 1, 1, 0x0000 },
+ { FLM_CONTROL_LDS, 1, 2, 0x0000 },
+ { FLM_CONTROL_LFS, 1, 3, 0x0000 },
+ { FLM_CONTROL_LIS, 1, 4, 0x0000 },
+ { FLM_CONTROL_PDS, 1, 9, 0x0000 },
+ { FLM_CONTROL_PIS, 1, 10, 0x0000 },
+ { FLM_CONTROL_RBL, 4, 13, 0 },
+ { FLM_CONTROL_RDS, 1, 7, 0x0000 },
+ { FLM_CONTROL_RIS, 1, 8, 0x0000 },
+ { FLM_CONTROL_SPLIT_SDRAM_USAGE, 5, 23, 16 },
+ { FLM_CONTROL_UDS, 1, 5, 0x0000 },
+ { FLM_CONTROL_UIS, 1, 6, 0x0000 },
+ { FLM_CONTROL_WPD, 1, 17, 0 },
+};
+
+static nthw_fpga_field_init_s flm_inf_data_fields[] = {
+ { FLM_INF_DATA_BYTES, 64, 0, 0x0000 }, { FLM_INF_DATA_CAUSE, 3, 224, 0x0000 },
+ { FLM_INF_DATA_EOR, 1, 287, 0x0000 }, { FLM_INF_DATA_ID, 32, 192, 0x0000 },
+ { FLM_INF_DATA_PACKETS, 64, 64, 0x0000 }, { FLM_INF_DATA_TS, 64, 128, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_aps_fields[] = {
+ { FLM_LOAD_APS_APS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_bin_fields[] = {
+ { FLM_LOAD_BIN_BIN, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_load_lps_fields[] = {
+ { FLM_LOAD_LPS_LPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
+ { FLM_LRN_DATA_ADJ, 32, 480, 0x0000 }, { FLM_LRN_DATA_COLOR, 32, 448, 0x0000 },
+ { FLM_LRN_DATA_DSCP, 6, 698, 0x0000 }, { FLM_LRN_DATA_ENT, 1, 693, 0x0000 },
+ { FLM_LRN_DATA_EOR, 1, 767, 0x0000 }, { FLM_LRN_DATA_FILL, 16, 544, 0x0000 },
+ { FLM_LRN_DATA_FT, 4, 560, 0x0000 }, { FLM_LRN_DATA_FT_MBR, 4, 564, 0x0000 },
+ { FLM_LRN_DATA_FT_MISS, 4, 568, 0x0000 }, { FLM_LRN_DATA_ID, 32, 512, 0x0000 },
+ { FLM_LRN_DATA_KID, 8, 328, 0x0000 }, { FLM_LRN_DATA_MBR_ID1, 28, 572, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID2, 28, 600, 0x0000 }, { FLM_LRN_DATA_MBR_ID3, 28, 628, 0x0000 },
+ { FLM_LRN_DATA_MBR_ID4, 28, 656, 0x0000 }, { FLM_LRN_DATA_NAT_EN, 1, 711, 0x0000 },
+ { FLM_LRN_DATA_NAT_IP, 32, 336, 0x0000 }, { FLM_LRN_DATA_NAT_PORT, 16, 400, 0x0000 },
+ { FLM_LRN_DATA_NOFI, 1, 716, 0x0000 }, { FLM_LRN_DATA_OP, 4, 694, 0x0000 },
+ { FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
+ { FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
+ { FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
+ { FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
+ { FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_prio_fields[] = {
+ { FLM_PRIO_FT0, 4, 4, 1 }, { FLM_PRIO_FT1, 4, 12, 1 }, { FLM_PRIO_FT2, 4, 20, 1 },
+ { FLM_PRIO_FT3, 4, 28, 1 }, { FLM_PRIO_LIMIT0, 4, 0, 0 }, { FLM_PRIO_LIMIT1, 4, 8, 0 },
+ { FLM_PRIO_LIMIT2, 4, 16, 0 }, { FLM_PRIO_LIMIT3, 4, 24, 0 },
+};
+
+static nthw_fpga_field_init_s flm_pst_ctrl_fields[] = {
+ { FLM_PST_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_PST_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_pst_data_fields[] = {
+ { FLM_PST_DATA_BP, 5, 0, 0x0000 },
+ { FLM_PST_DATA_PP, 5, 5, 0x0000 },
+ { FLM_PST_DATA_TP, 5, 10, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_ctrl_fields[] = {
+ { FLM_RCP_CTRL_ADR, 5, 0, 0x0000 },
+ { FLM_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_rcp_data_fields[] = {
+ { FLM_RCP_DATA_AUTO_IPV4_MASK, 1, 402, 0x0000 },
+ { FLM_RCP_DATA_BYT_DYN, 5, 387, 0x0000 },
+ { FLM_RCP_DATA_BYT_OFS, 8, 392, 0x0000 },
+ { FLM_RCP_DATA_IPN, 1, 386, 0x0000 },
+ { FLM_RCP_DATA_KID, 8, 377, 0x0000 },
+ { FLM_RCP_DATA_LOOKUP, 1, 0, 0x0000 },
+ { FLM_RCP_DATA_MASK, 320, 57, 0x0000 },
+ { FLM_RCP_DATA_OPN, 1, 385, 0x0000 },
+ { FLM_RCP_DATA_QW0_DYN, 5, 1, 0x0000 },
+ { FLM_RCP_DATA_QW0_OFS, 8, 6, 0x0000 },
+ { FLM_RCP_DATA_QW0_SEL, 2, 14, 0x0000 },
+ { FLM_RCP_DATA_QW4_DYN, 5, 16, 0x0000 },
+ { FLM_RCP_DATA_QW4_OFS, 8, 21, 0x0000 },
+ { FLM_RCP_DATA_SW8_DYN, 5, 29, 0x0000 },
+ { FLM_RCP_DATA_SW8_OFS, 8, 34, 0x0000 },
+ { FLM_RCP_DATA_SW8_SEL, 2, 42, 0x0000 },
+ { FLM_RCP_DATA_SW9_DYN, 5, 44, 0x0000 },
+ { FLM_RCP_DATA_SW9_OFS, 8, 49, 0x0000 },
+ { FLM_RCP_DATA_TXPLM, 2, 400, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scan_fields[] = {
+ { FLM_SCAN_I, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s flm_status_fields[] = {
+ { FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
+ { FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
+ { FLM_STATUS_CALIB_SUCCESS, 3, 0, 0 },
+ { FLM_STATUS_CRCERR, 1, 10, 0x0000 },
+ { FLM_STATUS_CRITICAL, 1, 8, 0x0000 },
+ { FLM_STATUS_EFT_BP, 1, 11, 0x0000 },
+ { FLM_STATUS_IDLE, 1, 7, 0x0000 },
+ { FLM_STATUS_INITDONE, 1, 6, 0x0000 },
+ { FLM_STATUS_PANIC, 1, 9, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_done_fields[] = {
+ { FLM_STAT_AUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_fail_fields[] = {
+ { FLM_STAT_AUL_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_aul_ignore_fields[] = {
+ { FLM_STAT_AUL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_hit_fields[] = {
+ { FLM_STAT_CSH_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_miss_fields[] = {
+ { FLM_STAT_CSH_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_csh_unh_fields[] = {
+ { FLM_STAT_CSH_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_move_fields[] = {
+ { FLM_STAT_CUC_MOVE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_cuc_start_fields[] = {
+ { FLM_STAT_CUC_START_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_flows_fields[] = {
+ { FLM_STAT_FLOWS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_done_fields[] = {
+ { FLM_STAT_INF_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_inf_skip_fields[] = {
+ { FLM_STAT_INF_SKIP_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_done_fields[] = {
+ { FLM_STAT_LRN_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_fail_fields[] = {
+ { FLM_STAT_LRN_FAIL_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_lrn_ignore_fields[] = {
+ { FLM_STAT_LRN_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_dis_fields[] = {
+ { FLM_STAT_PCK_DIS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_hit_fields[] = {
+ { FLM_STAT_PCK_HIT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_miss_fields[] = {
+ { FLM_STAT_PCK_MISS_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_pck_unh_fields[] = {
+ { FLM_STAT_PCK_UNH_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_done_fields[] = {
+ { FLM_STAT_PRB_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_prb_ignore_fields[] = {
+ { FLM_STAT_PRB_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_done_fields[] = {
+ { FLM_STAT_REL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_rel_ignore_fields[] = {
+ { FLM_STAT_REL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_sta_done_fields[] = {
+ { FLM_STAT_STA_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_tul_done_fields[] = {
+ { FLM_STAT_TUL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_done_fields[] = {
+ { FLM_STAT_UNL_DONE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_stat_unl_ignore_fields[] = {
+ { FLM_STAT_UNL_IGNORE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_sta_data_fields[] = {
+ { FLM_STA_DATA_EOR, 1, 95, 0x0000 }, { FLM_STA_DATA_ID, 32, 0, 0x0000 },
+ { FLM_STA_DATA_LDS, 1, 32, 0x0000 }, { FLM_STA_DATA_LFS, 1, 33, 0x0000 },
+ { FLM_STA_DATA_LIS, 1, 34, 0x0000 }, { FLM_STA_DATA_PDS, 1, 39, 0x0000 },
+ { FLM_STA_DATA_PIS, 1, 40, 0x0000 }, { FLM_STA_DATA_RDS, 1, 37, 0x0000 },
+ { FLM_STA_DATA_RIS, 1, 38, 0x0000 }, { FLM_STA_DATA_UDS, 1, 35, 0x0000 },
+ { FLM_STA_DATA_UIS, 1, 36, 0x0000 },
+};
+
+static nthw_fpga_register_init_s flm_registers[] = {
+ { FLM_BUF_CTRL, 14, 48, NTHW_FPGA_REG_TYPE_RW, 0, 3, flm_buf_ctrl_fields },
+ { FLM_CONTROL, 0, 31, NTHW_FPGA_REG_TYPE_MIXED, 134217728, 18, flm_control_fields },
+ { FLM_INF_DATA, 16, 288, NTHW_FPGA_REG_TYPE_RO, 0, 6, flm_inf_data_fields },
+ { FLM_LOAD_APS, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_aps_fields },
+ { FLM_LOAD_BIN, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_load_bin_fields },
+ { FLM_LOAD_LPS, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_load_lps_fields },
+ { FLM_LRN_DATA, 15, 768, NTHW_FPGA_REG_TYPE_WO, 0, 34, flm_lrn_data_fields },
+ { FLM_PRIO, 6, 32, NTHW_FPGA_REG_TYPE_WO, 269488144, 8, flm_prio_fields },
+ { FLM_PST_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_pst_ctrl_fields },
+ { FLM_PST_DATA, 13, 15, NTHW_FPGA_REG_TYPE_WO, 0, 3, flm_pst_data_fields },
+ { FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
+ { FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
+ { FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
+ { FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
+ { FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
+ { FLM_STAT_AUL_IGNORE, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_ignore_fields },
+ { FLM_STAT_CSH_HIT, 52, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_hit_fields },
+ { FLM_STAT_CSH_MISS, 53, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_miss_fields },
+ { FLM_STAT_CSH_UNH, 54, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_csh_unh_fields },
+ { FLM_STAT_CUC_MOVE, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_move_fields },
+ { FLM_STAT_CUC_START, 55, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_cuc_start_fields },
+ { FLM_STAT_FLOWS, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_flows_fields },
+ { FLM_STAT_INF_DONE, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_done_fields },
+ { FLM_STAT_INF_SKIP, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_inf_skip_fields },
+ { FLM_STAT_LRN_DONE, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_done_fields },
+ { FLM_STAT_LRN_FAIL, 34, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_fail_fields },
+ { FLM_STAT_LRN_IGNORE, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_lrn_ignore_fields },
+ { FLM_STAT_PCK_DIS, 51, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_dis_fields },
+ { FLM_STAT_PCK_HIT, 48, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_hit_fields },
+ { FLM_STAT_PCK_MISS, 49, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_miss_fields },
+ { FLM_STAT_PCK_UNH, 50, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_pck_unh_fields },
+ { FLM_STAT_PRB_DONE, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_done_fields },
+ { FLM_STAT_PRB_IGNORE, 40, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_prb_ignore_fields },
+ { FLM_STAT_REL_DONE, 37, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_done_fields },
+ { FLM_STAT_REL_IGNORE, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_rel_ignore_fields },
+ { FLM_STAT_STA_DONE, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_sta_done_fields },
+ { FLM_STAT_TUL_DONE, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_tul_done_fields },
+ { FLM_STAT_UNL_DONE, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_done_fields },
+ { FLM_STAT_UNL_IGNORE, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_unl_ignore_fields },
+ { FLM_STA_DATA, 17, 96, NTHW_FPGA_REG_TYPE_RO, 0, 11, flm_sta_data_fields },
+};
+
static nthw_fpga_field_init_s gfg_burstsize0_fields[] = {
{ GFG_BURSTSIZE0_VAL, 24, 0, 0 },
};
@@ -1743,6 +2025,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
{ MOD_DBS, 0, MOD_DBS, 0, 11, NTHW_FPGA_BUS_TYPE_RAB2, 12832, 27, dbs_registers },
+ { MOD_FLM, 0, MOD_FLM, 0, 25, NTHW_FPGA_BUS_TYPE_RAB1, 1280, 43, flm_registers },
{ MOD_GFG, 0, MOD_GFG, 1, 1, NTHW_FPGA_BUS_TYPE_RAB2, 8704, 10, gfg_registers },
{ MOD_GMF, 0, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9216, 12, gmf_registers },
{ MOD_GMF, 1, MOD_GMF, 2, 5, NTHW_FPGA_BUS_TYPE_RAB2, 9728, 12, gmf_registers },
@@ -1817,7 +2100,6 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
- { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
@@ -1937,5 +2219,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 23, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
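The fpga_modules table grown by this patch is keyed by module id plus instance number, which is how repeated blocks (e.g. the two MOD_GMF rows, instances 0 and 1) coexist with distinct bus addresses. A minimal sketch of the lookup such a table enables (struct layout and function names are illustrative assumptions, with the register arrays elided, not the driver's real definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative mirror of one fpga_modules row. */
struct module_init {
	uint32_t id;        /* e.g. MOD_FLM */
	int instance;       /* 0, 1, ... for repeated modules */
	int bus;            /* RAB bus the module sits on */
	uint32_t base_addr; /* bus address of the module */
	int n_registers;    /* entries in its register table */
};

/* Find a module by (id, instance); returns NULL when the module is
 * absent, so callers can treat optional FPGA modules as "not fitted". */
static const struct module_init *
module_find(const struct module_init *tbl, size_t n, uint32_t id, int instance)
{
	for (size_t i = 0; i < n; i++)
		if (tbl[i].id == id && tbl[i].instance == instance)
			return &tbl[i];
	return NULL;
}
```

Note that each patch that appends a row must also bump the module count in the trailing nthw_fpga_prod_init_s initializer (23 to 24 here), since the table length is carried separately from the array.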
* [PATCH v5 43/80] net/ntnic: add HFU module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (41 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 42/80] net/ntnic: add FLM " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 44/80] net/ntnic: add IFR " Serhii Iliushyk
` (36 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Header Field Update module updates protocol fields,
such as length fields and next-protocol fields,
when packets have been modified.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 739cabfb1c..82068746b3 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -919,6 +919,41 @@ static nthw_fpga_register_init_s gpio_phy_registers[] = {
{ GPIO_PHY_GPIO, 1, 10, NTHW_FPGA_REG_TYPE_RW, 17, 10, gpio_phy_gpio_fields },
};
+static nthw_fpga_field_init_s hfu_rcp_ctrl_fields[] = {
+ { HFU_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { HFU_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s hfu_rcp_data_fields[] = {
+ { HFU_RCP_DATA_LEN_A_ADD_DYN, 5, 15, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_ADD_OFS, 8, 20, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_OL4LEN, 1, 1, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_DYN, 5, 2, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_POS_OFS, 8, 7, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_SUB_DYN, 5, 28, 0x0000 },
+ { HFU_RCP_DATA_LEN_A_WR, 1, 0, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_DYN, 5, 47, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_ADD_OFS, 8, 52, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_DYN, 5, 34, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_POS_OFS, 8, 39, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_SUB_DYN, 5, 60, 0x0000 },
+ { HFU_RCP_DATA_LEN_B_WR, 1, 33, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_DYN, 5, 79, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_ADD_OFS, 8, 84, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_DYN, 5, 66, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_POS_OFS, 8, 71, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_SUB_DYN, 5, 92, 0x0000 },
+ { HFU_RCP_DATA_LEN_C_WR, 1, 65, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_DYN, 5, 98, 0x0000 },
+ { HFU_RCP_DATA_TTL_POS_OFS, 8, 103, 0x0000 },
+ { HFU_RCP_DATA_TTL_WR, 1, 97, 0x0000 },
+};
+
+static nthw_fpga_register_init_s hfu_registers[] = {
+ { HFU_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, hfu_rcp_ctrl_fields },
+ { HFU_RCP_DATA, 1, 111, NTHW_FPGA_REG_TYPE_WO, 0, 22, hfu_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s hif_build_time_fields[] = {
{ HIF_BUILD_TIME_TIME, 32, 0, 1726740521 },
};
@@ -2033,6 +2068,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_GPIO_PHY, 0, MOD_GPIO_PHY, 1, 0, NTHW_FPGA_BUS_TYPE_RAB0, 16386, 2,
gpio_phy_registers
},
+ { MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
@@ -2219,5 +2255,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 24, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 44/80] net/ntnic: add IFR module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (42 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 43/80] net/ntnic: add HFU " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 45/80] net/ntnic: add MAC Rx " Serhii Iliushyk
` (35 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The IP Fragmenter module can fragment outgoing packets
based on a programmable MTU.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 82068746b3..509e1f6860 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1095,6 +1095,44 @@ static nthw_fpga_register_init_s hsh_registers[] = {
{ HSH_RCP_DATA, 1, 743, NTHW_FPGA_REG_TYPE_WO, 0, 23, hsh_rcp_data_fields },
};
+static nthw_fpga_field_init_s ifr_counters_ctrl_fields[] = {
+ { IFR_COUNTERS_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_COUNTERS_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_counters_data_fields[] = {
+ { IFR_COUNTERS_DATA_DROP, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_ctrl_fields[] = {
+ { IFR_DF_BUF_CTRL_AVAILABLE, 11, 0, 0x0000 },
+ { IFR_DF_BUF_CTRL_MTU_PROFILE, 16, 11, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_df_buf_data_fields[] = {
+ { IFR_DF_BUF_DATA_FIFO_DAT, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_ctrl_fields[] = {
+ { IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ifr_rcp_data_fields[] = {
+ { IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 }, { IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 }, { IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ifr_registers[] = {
+ { IFR_COUNTERS_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_counters_ctrl_fields },
+ { IFR_COUNTERS_DATA, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_counters_data_fields },
+ { IFR_DF_BUF_CTRL, 2, 27, NTHW_FPGA_REG_TYPE_RO, 0, 2, ifr_df_buf_ctrl_fields },
+ { IFR_DF_BUF_DATA, 3, 128, NTHW_FPGA_REG_TYPE_RO, 0, 1, ifr_df_buf_data_fields },
+ { IFR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ifr_rcp_ctrl_fields },
+ { IFR_RCP_DATA, 1, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, ifr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s iic_adr_fields[] = {
{ IIC_ADR_SLV_ADR, 7, 1, 0 },
};
@@ -2071,6 +2109,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_HFU, 0, MOD_HFU, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 9472, 2, hfu_registers },
{ MOD_HIF, 0, MOD_HIF, 0, 0, NTHW_FPGA_BUS_TYPE_PCI, 0, 18, hif_registers },
{ MOD_HSH, 0, MOD_HSH, 0, 5, NTHW_FPGA_BUS_TYPE_RAB1, 1536, 2, hsh_registers },
+ { MOD_IFR, 0, MOD_IFR, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 9984, 6, ifr_registers },
{ MOD_IIC, 0, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 768, 22, iic_registers },
{ MOD_IIC, 1, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 896, 22, iic_registers },
{ MOD_IIC, 2, MOD_IIC, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 24832, 22, iic_registers },
@@ -2255,5 +2294,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 25, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
};
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
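The IFR_RCP_DATA fields above (IPV4_EN, IPV6_EN, IPV4_DF_DROP, IPV6_DROP, and a 14-bit MTU) suggest a per-packet fragment/drop/pass decision. The following is a speculative sketch of that decision logic inferred only from the field names; the actual hardware semantics may differ, and all identifiers here are hypothetical:

```c
#include <assert.h>

enum ifr_action { IFR_PASS, IFR_FRAGMENT, IFR_DROP };

/* Assumed software mirror of one IFR_RCP_DATA recipe. */
struct ifr_rcp {
	int ipv4_en;      /* fragmentation enabled for IPv4 */
	int ipv6_en;      /* fragmentation enabled for IPv6 */
	int ipv4_df_drop; /* drop oversize IPv4 packets with DF set */
	int ipv6_drop;    /* drop oversize IPv6 instead of fragmenting */
	unsigned mtu;     /* programmable MTU (14-bit field) */
};

/* Classify one outgoing packet against the recipe. */
static enum ifr_action
ifr_classify(const struct ifr_rcp *r, int is_ipv6, int df_set, unsigned pkt_len)
{
	if (pkt_len <= r->mtu)
		return IFR_PASS;
	if (is_ipv6) {
		if (!r->ipv6_en)
			return IFR_PASS;
		return r->ipv6_drop ? IFR_DROP : IFR_FRAGMENT;
	}
	if (!r->ipv4_en)
		return IFR_PASS;
	if (df_set)
		return r->ipv4_df_drop ? IFR_DROP : IFR_PASS;
	return IFR_FRAGMENT;
}
```

The IFR_COUNTERS_DATA_DROP counter in the same patch would then account for packets taking the drop branch.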
* [PATCH v5 45/80] net/ntnic: add MAC Rx module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (43 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 44/80] net/ntnic: add IFR " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 46/80] net/ntnic: add MAC Tx " Serhii Iliushyk
` (34 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The MAC Rx module exposes per-port Rx statistics counters,
including total/good packet and byte counts as well as bad FCS,
fragment, and undersize counts.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 61 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_rx.h | 29 +++++++++
4 files changed, 92 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 509e1f6860..eecd6342c0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1774,6 +1774,63 @@ static nthw_fpga_register_init_s mac_pcs_registers[] = {
},
};
+static nthw_fpga_field_init_s mac_rx_bad_fcs_fields[] = {
+ { MAC_RX_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_fragment_fields[] = {
+ { MAC_RX_FRAGMENT_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_bad_fcs_fields[] = {
+ { MAC_RX_PACKET_BAD_FCS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_packet_small_fields[] = {
+ { MAC_RX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_bytes_fields[] = {
+ { MAC_RX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_bytes_fields[] = {
+ { MAC_RX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_good_packets_fields[] = {
+ { MAC_RX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_total_packets_fields[] = {
+ { MAC_RX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_rx_undersize_fields[] = {
+ { MAC_RX_UNDERSIZE_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_rx_registers[] = {
+ { MAC_RX_BAD_FCS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_bad_fcs_fields },
+ { MAC_RX_FRAGMENT, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_fragment_fields },
+ {
+ MAC_RX_PACKET_BAD_FCS, 7, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_packet_bad_fcs_fields
+ },
+ { MAC_RX_PACKET_SMALL, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_packet_small_fields },
+ { MAC_RX_TOTAL_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_bytes_fields },
+ {
+ MAC_RX_TOTAL_GOOD_BYTES, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_bytes_fields
+ },
+ {
+ MAC_RX_TOTAL_GOOD_PACKETS, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_rx_total_good_packets_fields
+ },
+ { MAC_RX_TOTAL_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_total_packets_fields },
+ { MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2123,6 +2180,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
MOD_MAC_PCS, 1, MOD_MAC_PCS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB2, 11776, 44,
mac_pcs_registers
},
+ { MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
+ { MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2294,5 +2353,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 26, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index b6be02f45e..5983ba7095 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -29,6 +29,7 @@
#define MOD_IIC (0x7629cddbUL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
+#define MOD_MAC_RX (0x6347b490UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -43,7 +44,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (14)
+#define MOD_IDX_COUNT (31)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 3560eeda7d..5ebbec6c7e 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -30,6 +30,7 @@
#include "nthw_fpga_reg_defs_ins.h"
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
+#include "nthw_fpga_reg_defs_mac_rx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
new file mode 100644
index 0000000000..3829c10f3b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_rx.h
@@ -0,0 +1,29 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_RX_
+#define _NTHW_FPGA_REG_DEFS_MAC_RX_
+
+/* MAC_RX */
+#define MAC_RX_BAD_FCS (0xca07f618UL)
+#define MAC_RX_BAD_FCS_COUNT (0x11d5ba0eUL)
+#define MAC_RX_FRAGMENT (0x5363b736UL)
+#define MAC_RX_FRAGMENT_COUNT (0xf664c9aUL)
+#define MAC_RX_PACKET_BAD_FCS (0x4cb8b34cUL)
+#define MAC_RX_PACKET_BAD_FCS_COUNT (0xb6701e28UL)
+#define MAC_RX_PACKET_SMALL (0xed318a65UL)
+#define MAC_RX_PACKET_SMALL_COUNT (0x72095ec7UL)
+#define MAC_RX_TOTAL_BYTES (0x831313e2UL)
+#define MAC_RX_TOTAL_BYTES_COUNT (0xe5d8be59UL)
+#define MAC_RX_TOTAL_GOOD_BYTES (0x912c2d1cUL)
+#define MAC_RX_TOTAL_GOOD_BYTES_COUNT (0x63bb5f3eUL)
+#define MAC_RX_TOTAL_GOOD_PACKETS (0xfbb4f497UL)
+#define MAC_RX_TOTAL_GOOD_PACKETS_COUNT (0xae9d21b0UL)
+#define MAC_RX_TOTAL_PACKETS (0xb0ea3730UL)
+#define MAC_RX_TOTAL_PACKETS_COUNT (0x532c885dUL)
+#define MAC_RX_UNDERSIZE (0xb6fa4bdbUL)
+#define MAC_RX_UNDERSIZE_COUNT (0x471945ffUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_RX_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
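The MAC Rx registers added in the patch above are 32-bit read-only counters (MAC_RX_TOTAL_PACKETS, MAC_RX_BAD_FCS, and so on). A driver polling such free-running hardware counters typically folds each raw reading into a 64-bit software total, letting unsigned arithmetic absorb a single 32-bit wrap. This is a generic sketch of that pattern, not code from the ntnic driver:

```c
#include <stdint.h>

/* Wrap-safe accumulation of a 32-bit free-running hardware counter
 * (e.g. MAC_RX_TOTAL_PACKETS_COUNT) into a 64-bit software total.
 * 'prev' holds the previous raw reading; unsigned subtraction yields
 * the correct delta across at most one 32-bit wrap-around. */
static uint64_t accumulate_counter(uint64_t total, uint32_t *prev, uint32_t raw)
{
	total += (uint32_t)(raw - *prev);
	*prev = raw;
	return total;
}
```

The scheme only stays correct if the counter is polled more often than it can wrap once.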
* [PATCH v5 46/80] net/ntnic: add MAC Tx module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (44 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 45/80] net/ntnic: add MAC Rx " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 47/80] net/ntnic: add RPP LR " Serhii Iliushyk
` (33 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Media Access Control transmit module contains counters
that keep track of transmitted packets.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 38 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../supported/nthw_fpga_reg_defs_mac_tx.h | 21 ++++++++++
4 files changed, 61 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index eecd6342c0..7a2f5aec32 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1831,6 +1831,40 @@ static nthw_fpga_register_init_s mac_rx_registers[] = {
{ MAC_RX_UNDERSIZE, 8, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_rx_undersize_fields },
};
+static nthw_fpga_field_init_s mac_tx_packet_small_fields[] = {
+ { MAC_TX_PACKET_SMALL_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_bytes_fields[] = {
+ { MAC_TX_TOTAL_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_bytes_fields[] = {
+ { MAC_TX_TOTAL_GOOD_BYTES_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_good_packets_fields[] = {
+ { MAC_TX_TOTAL_GOOD_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s mac_tx_total_packets_fields[] = {
+ { MAC_TX_TOTAL_PACKETS_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s mac_tx_registers[] = {
+ { MAC_TX_PACKET_SMALL, 2, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_packet_small_fields },
+ { MAC_TX_TOTAL_BYTES, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_bytes_fields },
+ {
+ MAC_TX_TOTAL_GOOD_BYTES, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_bytes_fields
+ },
+ {
+ MAC_TX_TOTAL_GOOD_PACKETS, 1, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ mac_tx_total_good_packets_fields
+ },
+ { MAC_TX_TOTAL_PACKETS, 0, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, mac_tx_total_packets_fields },
+};
+
static nthw_fpga_field_init_s pci_rd_tg_tg_ctrl_fields[] = {
{ PCI_RD_TG_TG_CTRL_TG_RD_RDY, 1, 0, 0 },
};
@@ -2182,6 +2216,8 @@ static nthw_fpga_module_init_s fpga_modules[] = {
},
{ MOD_MAC_RX, 0, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 10752, 9, mac_rx_registers },
{ MOD_MAC_RX, 1, MOD_MAC_RX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12288, 9, mac_rx_registers },
+ { MOD_MAC_TX, 0, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 11264, 5, mac_tx_registers },
+ { MOD_MAC_TX, 1, MOD_MAC_TX, 0, 0, NTHW_FPGA_BUS_TYPE_RAB2, 12800, 5, mac_tx_registers },
{
MOD_PCI_RD_TG, 0, MOD_PCI_RD_TG, 0, 1, NTHW_FPGA_BUS_TYPE_RAB0, 2320, 6,
pci_rd_tg_registers
@@ -2353,5 +2389,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 28, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 5983ba7095..f4a913f3d2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -30,6 +30,7 @@
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
+#define MOD_MAC_TX (0x351d1316UL)
#define MOD_PCIE3 (0xfbc48c18UL)
#define MOD_PCI_RD_TG (0x9ad9eed2UL)
#define MOD_PCI_WR_TG (0x274b69e1UL)
@@ -44,7 +45,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (31)
+#define MOD_IDX_COUNT (32)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 5ebbec6c7e..7741aa563f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -31,6 +31,7 @@
#include "nthw_fpga_reg_defs_km.h"
#include "nthw_fpga_reg_defs_mac_pcs.h"
#include "nthw_fpga_reg_defs_mac_rx.h"
+#include "nthw_fpga_reg_defs_mac_tx.h"
#include "nthw_fpga_reg_defs_pcie3.h"
#include "nthw_fpga_reg_defs_pci_rd_tg.h"
#include "nthw_fpga_reg_defs_pci_wr_tg.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
new file mode 100644
index 0000000000..6a77d449ae
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_mac_tx.h
@@ -0,0 +1,21 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_MAC_TX_
+#define _NTHW_FPGA_REG_DEFS_MAC_TX_
+
+/* MAC_TX */
+#define MAC_TX_PACKET_SMALL (0xcfcb5e97UL)
+#define MAC_TX_PACKET_SMALL_COUNT (0x84345b01UL)
+#define MAC_TX_TOTAL_BYTES (0x7bd15854UL)
+#define MAC_TX_TOTAL_BYTES_COUNT (0x61fb238cUL)
+#define MAC_TX_TOTAL_GOOD_BYTES (0xcf0260fUL)
+#define MAC_TX_TOTAL_GOOD_BYTES_COUNT (0x8603398UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS (0xd89f151UL)
+#define MAC_TX_TOTAL_GOOD_PACKETS_COUNT (0x12c47c77UL)
+#define MAC_TX_TOTAL_PACKETS (0xe37b5ed4UL)
+#define MAC_TX_TOTAL_PACKETS_COUNT (0x21ddd2ddUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_MAC_TX_ */
--
2.45.0
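The mac_tx_registers[] table above maps each register id to its word index inside the module (TOTAL_PACKETS at index 0, TOTAL_GOOD_PACKETS at 1, and so on). A cut-down sketch of how such an init table can be searched; the struct here is a simplified stand-in for nthw_fpga_register_init_s, not the real definition:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for nthw_fpga_register_init_s: register id,
 * word index within the module, and register width in bits. The real
 * struct also carries the access type, reset info and a field table. */
struct reg_init {
	uint32_t id;
	int index;
	int bit_width;
};

/* Two MAC_TX ids and indices taken from the patch above. */
#define MAC_TX_TOTAL_PACKETS      (0xe37b5ed4UL)
#define MAC_TX_TOTAL_GOOD_PACKETS (0xd89f151UL)

static const struct reg_init mac_tx_regs[] = {
	{ MAC_TX_TOTAL_PACKETS, 0, 32 },
	{ MAC_TX_TOTAL_GOOD_PACKETS, 1, 32 },
};

/* Return the word index for a register id, or -1 if it is absent. */
static int reg_index(const struct reg_init *tab, size_t n, uint32_t id)
{
	for (size_t i = 0; i < n; i++)
		if (tab[i].id == id)
			return tab[i].index;
	return -1;
}
```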
* [PATCH v5 47/80] net/ntnic: add RPP LR module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (45 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 46/80] net/ntnic: add MAC Tx " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 48/80] net/ntnic: add SLC " Serhii Iliushyk
` (32 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The RX Packet Processor for Local Retransmit module can add bytes
in the FPGA TX pipeline, which is needed when a packet increases in size.
Note that this only makes room for packet expansion;
the actual expansion is done by other modules.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 32 ++++++++++++++++++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 7a2f5aec32..33437da204 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2138,6 +2138,35 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
+ { RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_ifr_rcp_data_fields[] = {
+ { RPP_LR_IFR_RCP_DATA_IPV4_DF_DROP, 1, 17, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV4_EN, 1, 0, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_DROP, 1, 16, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_IPV6_EN, 1, 1, 0x0000 },
+ { RPP_LR_IFR_RCP_DATA_MTU, 14, 2, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_ctrl_fields[] = {
+ { RPP_LR_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPP_LR_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpp_lr_rcp_data_fields[] = {
+ { RPP_LR_RCP_DATA_EXP, 14, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpp_lr_registers[] = {
+ { RPP_LR_IFR_RCP_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_ifr_rcp_ctrl_fields },
+ { RPP_LR_IFR_RCP_DATA, 3, 18, NTHW_FPGA_REG_TYPE_WO, 0, 5, rpp_lr_ifr_rcp_data_fields },
+ { RPP_LR_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpp_lr_rcp_ctrl_fields },
+ { RPP_LR_RCP_DATA, 1, 14, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpp_lr_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s rst9563_ctrl_fields[] = {
{ RST9563_CTRL_PTP_MMCM_CLKSEL, 1, 2, 1 },
{ RST9563_CTRL_TS_CLKSEL, 1, 1, 1 },
@@ -2230,6 +2259,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_QSL, 0, MOD_QSL, 0, 7, NTHW_FPGA_BUS_TYPE_RAB1, 1792, 8, qsl_registers },
{ MOD_RAC, 0, MOD_RAC, 3, 0, NTHW_FPGA_BUS_TYPE_PCI, 8192, 14, rac_registers },
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
+ { MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
};
@@ -2389,5 +2419,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 30, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
};
--
2.45.0
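The RPP_LR registers above follow the CTRL/DATA pattern seen throughout this FPGA map: the CTRL word carries a table address (ADR, 4 bits at bit 0 for RPP_LR_RCP_CTRL) and an access count (CNT, 16 bits at bit 16). Assuming the field initializers read as {id, width, lsb, reset} — which matches the register widths in the table — the CTRL word could be packed like this (a sketch, not driver code):

```c
#include <stdint.h>

/* Pack RPP_LR_RCP_CTRL from its field layout in the patch above:
 * ADR is 4 bits at bit 0, CNT is 16 bits at bit 16 (assumed
 * {id, width, lsb, reset} initializer order). */
static uint32_t rpp_lr_rcp_ctrl(uint32_t adr, uint32_t cnt)
{
	return ((adr & 0xFu) << 0) | ((cnt & 0xFFFFu) << 16);
}
```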
* [PATCH v5 48/80] net/ntnic: add SLC LR module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (46 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 47/80] net/ntnic: add RPP LR " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 49/80] net/ntnic: add Tx CPY " Serhii Iliushyk
` (31 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The Slicer for Local Retransmit module can cut off the head of a packet
before the packet leaves the FPGA RX pipeline.
This is used when the TX pipeline is configured
to add a new header to the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 20 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 33437da204..0f69f89527 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2214,6 +2214,23 @@ static nthw_fpga_register_init_s rst9563_registers[] = {
{ RST9563_STICKY, 3, 6, NTHW_FPGA_REG_TYPE_RC1, 0, 6, rst9563_sticky_fields },
};
+static nthw_fpga_field_init_s slc_rcp_ctrl_fields[] = {
+ { SLC_RCP_CTRL_ADR, 6, 0, 0x0000 },
+ { SLC_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s slc_rcp_data_fields[] = {
+ { SLC_RCP_DATA_HEAD_DYN, 5, 1, 0x0000 }, { SLC_RCP_DATA_HEAD_OFS, 8, 6, 0x0000 },
+ { SLC_RCP_DATA_HEAD_SLC_EN, 1, 0, 0x0000 }, { SLC_RCP_DATA_PCAP, 1, 35, 0x0000 },
+ { SLC_RCP_DATA_TAIL_DYN, 5, 15, 0x0000 }, { SLC_RCP_DATA_TAIL_OFS, 15, 20, 0x0000 },
+ { SLC_RCP_DATA_TAIL_SLC_EN, 1, 14, 0x0000 },
+};
+
+static nthw_fpga_register_init_s slc_registers[] = {
+ { SLC_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, slc_rcp_ctrl_fields },
+ { SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2261,6 +2278,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RMC, 0, MOD_RMC, 1, 3, NTHW_FPGA_BUS_TYPE_RAB0, 12288, 4, rmc_registers },
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
+ { MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2419,5 +2437,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 31, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index f4a913f3d2..865dd6a084 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,11 +41,12 @@
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
+#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (32)
+#define MOD_IDX_COUNT (33)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
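SLC_RCP_DATA above is a 36-bit register, so its fields straddle a 32-bit boundary and need a 64-bit host value. Reading the field initializers as {id, width, lsb, reset} (an assumption, but one that makes the widths sum to exactly 36), the layout is HEAD_SLC_EN 1@0, HEAD_DYN 5@1, HEAD_OFS 8@6, TAIL_SLC_EN 1@14, TAIL_DYN 5@15, TAIL_OFS 15@20, PCAP 1@35. A hypothetical packing helper:

```c
#include <stdint.h>

/* Pack the 36-bit SLC_RCP_DATA word from the field layout above.
 * Widths and bit positions are taken from slc_rcp_data_fields[]. */
static uint64_t slc_rcp_data(uint32_t head_slc_en, uint32_t head_dyn,
			     uint32_t head_ofs, uint32_t tail_slc_en,
			     uint32_t tail_dyn, uint32_t tail_ofs,
			     uint32_t pcap)
{
	return ((uint64_t)(head_slc_en & 0x1) << 0) |
	       ((uint64_t)(head_dyn & 0x1F) << 1) |
	       ((uint64_t)(head_ofs & 0xFF) << 6) |
	       ((uint64_t)(tail_slc_en & 0x1) << 14) |
	       ((uint64_t)(tail_dyn & 0x1F) << 15) |
	       ((uint64_t)(tail_ofs & 0x7FFF) << 20) |
	       ((uint64_t)(pcap & 0x1) << 35);
}
```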
* [PATCH v5 49/80] net/ntnic: add Tx CPY module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (47 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 48/80] net/ntnic: add SLC " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 50/80] net/ntnic: add Tx INS " Serhii Iliushyk
` (30 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Copy module writes data to packet fields based on the lookup
performed by the FLM module.
This is used for NAT and can support other actions based
on the RTE action MODIFY_FIELD.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 204 +++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 205 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 0f69f89527..60fd748ea2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -270,6 +270,207 @@ static nthw_fpga_register_init_s cat_registers[] = {
{ CAT_RCK_DATA, 3, 32, NTHW_FPGA_REG_TYPE_WO, 0, 32, cat_rck_data_fields },
};
+static nthw_fpga_field_init_s cpy_packet_reader0_ctrl_fields[] = {
+ { CPY_PACKET_READER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_PACKET_READER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_packet_reader0_data_fields[] = {
+ { CPY_PACKET_READER0_DATA_DYN, 5, 10, 0x0000 },
+ { CPY_PACKET_READER0_DATA_OFS, 10, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_ctrl_fields[] = {
+ { CPY_WRITER0_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_data_fields[] = {
+ { CPY_WRITER0_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER0_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER0_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER0_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER0_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_ctrl_fields[] = {
+ { CPY_WRITER0_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER0_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer0_mask_data_fields[] = {
+ { CPY_WRITER0_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_ctrl_fields[] = {
+ { CPY_WRITER1_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_data_fields[] = {
+ { CPY_WRITER1_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER1_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER1_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER1_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER1_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_ctrl_fields[] = {
+ { CPY_WRITER1_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER1_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer1_mask_data_fields[] = {
+ { CPY_WRITER1_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_ctrl_fields[] = {
+ { CPY_WRITER2_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_data_fields[] = {
+ { CPY_WRITER2_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER2_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER2_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER2_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER2_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_ctrl_fields[] = {
+ { CPY_WRITER2_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER2_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer2_mask_data_fields[] = {
+ { CPY_WRITER2_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_ctrl_fields[] = {
+ { CPY_WRITER3_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_data_fields[] = {
+ { CPY_WRITER3_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER3_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER3_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER3_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER3_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_ctrl_fields[] = {
+ { CPY_WRITER3_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER3_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer3_mask_data_fields[] = {
+ { CPY_WRITER3_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_ctrl_fields[] = {
+ { CPY_WRITER4_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_data_fields[] = {
+ { CPY_WRITER4_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER4_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER4_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER4_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER4_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_ctrl_fields[] = {
+ { CPY_WRITER4_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER4_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer4_mask_data_fields[] = {
+ { CPY_WRITER4_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_ctrl_fields[] = {
+ { CPY_WRITER5_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_data_fields[] = {
+ { CPY_WRITER5_DATA_DYN, 5, 17, 0x0000 }, { CPY_WRITER5_DATA_LEN, 5, 22, 0x0000 },
+ { CPY_WRITER5_DATA_MASK_POINTER, 4, 27, 0x0000 }, { CPY_WRITER5_DATA_OFS, 14, 3, 0x0000 },
+ { CPY_WRITER5_DATA_READER_SELECT, 3, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_ctrl_fields[] = {
+ { CPY_WRITER5_MASK_CTRL_ADR, 4, 0, 0x0000 },
+ { CPY_WRITER5_MASK_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s cpy_writer5_mask_data_fields[] = {
+ { CPY_WRITER5_MASK_DATA_BYTE_MASK, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s cpy_registers[] = {
+ {
+ CPY_PACKET_READER0_CTRL, 24, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_ctrl_fields
+ },
+ {
+ CPY_PACKET_READER0_DATA, 25, 15, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_packet_reader0_data_fields
+ },
+ { CPY_WRITER0_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer0_ctrl_fields },
+ { CPY_WRITER0_DATA, 1, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer0_data_fields },
+ {
+ CPY_WRITER0_MASK_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer0_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER0_MASK_DATA, 3, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer0_mask_data_fields
+ },
+ { CPY_WRITER1_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer1_ctrl_fields },
+ { CPY_WRITER1_DATA, 5, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer1_data_fields },
+ {
+ CPY_WRITER1_MASK_CTRL, 6, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer1_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER1_MASK_DATA, 7, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer1_mask_data_fields
+ },
+ { CPY_WRITER2_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer2_ctrl_fields },
+ { CPY_WRITER2_DATA, 9, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer2_data_fields },
+ {
+ CPY_WRITER2_MASK_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer2_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER2_MASK_DATA, 11, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer2_mask_data_fields
+ },
+ { CPY_WRITER3_CTRL, 12, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer3_ctrl_fields },
+ { CPY_WRITER3_DATA, 13, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer3_data_fields },
+ {
+ CPY_WRITER3_MASK_CTRL, 14, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer3_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER3_MASK_DATA, 15, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer3_mask_data_fields
+ },
+ { CPY_WRITER4_CTRL, 16, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer4_ctrl_fields },
+ { CPY_WRITER4_DATA, 17, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer4_data_fields },
+ {
+ CPY_WRITER4_MASK_CTRL, 18, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer4_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER4_MASK_DATA, 19, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer4_mask_data_fields
+ },
+ { CPY_WRITER5_CTRL, 20, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, cpy_writer5_ctrl_fields },
+ { CPY_WRITER5_DATA, 21, 31, NTHW_FPGA_REG_TYPE_WO, 0, 5, cpy_writer5_data_fields },
+ {
+ CPY_WRITER5_MASK_CTRL, 22, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2,
+ cpy_writer5_mask_ctrl_fields
+ },
+ {
+ CPY_WRITER5_MASK_DATA, 23, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1,
+ cpy_writer5_mask_data_fields
+ },
+};
+
static nthw_fpga_field_init_s csu_rcp_ctrl_fields[] = {
{ CSU_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ CSU_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2279,6 +2480,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RPP_LR, 0, MOD_RPP_LR, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2304, 4, rpp_lr_registers },
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
+ { MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2437,5 +2639,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 32, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 865dd6a084..0ab5ae0310 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -15,6 +15,7 @@
#define MOD_UNKNOWN (0L)/* Unknown/uninitialized - keep this as the first element */
#define MOD_CAT (0x30b447c2UL)
+#define MOD_CPY (0x1ddc186fUL)
#define MOD_CSU (0x3f470787UL)
#define MOD_DBS (0x80b29727UL)
#define MOD_FLM (0xe7ba53a4UL)
@@ -46,7 +47,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (33)
+#define MOD_IDX_COUNT (34)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
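Each of the six CPY writers above shares one 31-bit DATA layout: READER_SELECT 3 bits at bit 0, OFS 14 at bit 3, DYN 5 at bit 17, LEN 5 at bit 22 and MASK_POINTER 4 at bit 27 (again assuming {id, width, lsb, reset} initializer order; the widths sum to the register's 31 bits). A hypothetical helper packing one writer's DATA word:

```c
#include <stdint.h>

/* Pack a CPY_WRITERn_DATA word from the common field layout shared by
 * writers 0..5 in the patch above. */
static uint32_t cpy_writer_data(uint32_t reader_select, uint32_t ofs,
				uint32_t dyn, uint32_t len,
				uint32_t mask_pointer)
{
	return ((reader_select & 0x7u) << 0) |
	       ((ofs & 0x3FFFu) << 3) |
	       ((dyn & 0x1Fu) << 17) |
	       ((len & 0x1Fu) << 22) |
	       ((mask_pointer & 0xFu) << 27);
}
```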
* [PATCH v5 50/80] net/ntnic: add Tx INS module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (48 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 49/80] net/ntnic: add Tx CPY " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 51/80] net/ntnic: add Tx RPL " Serhii Iliushyk
` (29 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Inserter module injects zeros at a given offset in a packet,
effectively expanding the packet.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 19 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 ++-
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 60fd748ea2..c8841b1dc2 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -1457,6 +1457,22 @@ static nthw_fpga_register_init_s iic_registers[] = {
{ IIC_TX_FIFO_OCY, 69, 4, NTHW_FPGA_REG_TYPE_RO, 0, 1, iic_tx_fifo_ocy_fields },
};
+static nthw_fpga_field_init_s ins_rcp_ctrl_fields[] = {
+ { INS_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { INS_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s ins_rcp_data_fields[] = {
+ { INS_RCP_DATA_DYN, 5, 0, 0x0000 },
+ { INS_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { INS_RCP_DATA_OFS, 10, 5, 0x0000 },
+};
+
+static nthw_fpga_register_init_s ins_registers[] = {
+ { INS_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, ins_rcp_ctrl_fields },
+ { INS_RCP_DATA, 1, 23, NTHW_FPGA_REG_TYPE_WO, 0, 3, ins_rcp_data_fields },
+};
+
static nthw_fpga_field_init_s km_cam_ctrl_fields[] = {
{ KM_CAM_CTRL_ADR, 13, 0, 0x0000 },
{ KM_CAM_CTRL_CNT, 16, 16, 0x0000 },
@@ -2481,6 +2497,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_RST9563, 0, MOD_RST9563, 0, 5, NTHW_FPGA_BUS_TYPE_RAB0, 1024, 5, rst9563_registers },
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
+ { MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2639,5 +2656,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 33, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 0ab5ae0310..8c0c727e16 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -28,6 +28,7 @@
#define MOD_I2CM (0x93bc7780UL)
#define MOD_IFR (0x9b01f1e6UL)
#define MOD_IIC (0x7629cddbUL)
+#define MOD_INS (0x24df4b78UL)
#define MOD_KM (0xcfbd9dbeUL)
#define MOD_MAC_PCS (0x7abe24c7UL)
#define MOD_MAC_RX (0x6347b490UL)
@@ -47,7 +48,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (34)
+#define MOD_IDX_COUNT (35)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
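INS_RCP_DATA above is a 23-bit recipe word: DYN 5 bits at bit 0, OFS 10 at bit 5 and LEN 8 at bit 15 (read from ins_rcp_data_fields[], assuming {id, width, lsb, reset} order). A hypothetical packing helper mirroring that layout:

```c
#include <stdint.h>

/* Pack the 23-bit INS_RCP_DATA word: where to insert (DYN/OFS) and
 * how many zero bytes to inject (LEN), per the field layout above. */
static uint32_t ins_rcp_data(uint32_t dyn, uint32_t ofs, uint32_t len)
{
	return ((dyn & 0x1Fu) << 0) |
	       ((ofs & 0x3FFu) << 5) |
	       ((len & 0xFFu) << 15);
}
```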
* [PATCH v5 51/80] net/ntnic: add Tx RPL module registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (49 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 50/80] net/ntnic: add Tx INS " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 52/80] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
` (28 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The TX Replacer module can replace a range of bytes in a packet.
The replacement data is stored in a table in the module
and will often contain tunnel data.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 41 ++++++++++++++++++-
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 3 +-
2 files changed, 42 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index c8841b1dc2..a3d9f94fc6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2355,6 +2355,44 @@ static nthw_fpga_register_init_s rmc_registers[] = {
{ RMC_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, rmc_status_fields },
};
+static nthw_fpga_field_init_s rpl_ext_ctrl_fields[] = {
+ { RPL_EXT_CTRL_ADR, 10, 0, 0x0000 },
+ { RPL_EXT_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_ext_data_fields[] = {
+ { RPL_EXT_DATA_RPL_PTR, 12, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_ctrl_fields[] = {
+ { RPL_RCP_CTRL_ADR, 4, 0, 0x0000 },
+ { RPL_RCP_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rcp_data_fields[] = {
+ { RPL_RCP_DATA_DYN, 5, 0, 0x0000 }, { RPL_RCP_DATA_ETH_TYPE_WR, 1, 36, 0x0000 },
+ { RPL_RCP_DATA_EXT_PRIO, 1, 35, 0x0000 }, { RPL_RCP_DATA_LEN, 8, 15, 0x0000 },
+ { RPL_RCP_DATA_OFS, 10, 5, 0x0000 }, { RPL_RCP_DATA_RPL_PTR, 12, 23, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_ctrl_fields[] = {
+ { RPL_RPL_CTRL_ADR, 12, 0, 0x0000 },
+ { RPL_RPL_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s rpl_rpl_data_fields[] = {
+ { RPL_RPL_DATA_VALUE, 128, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s rpl_registers[] = {
+ { RPL_EXT_CTRL, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_ext_ctrl_fields },
+ { RPL_EXT_DATA, 3, 12, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_ext_data_fields },
+ { RPL_RCP_CTRL, 0, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rcp_ctrl_fields },
+ { RPL_RCP_DATA, 1, 37, NTHW_FPGA_REG_TYPE_WO, 0, 6, rpl_rcp_data_fields },
+ { RPL_RPL_CTRL, 4, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, rpl_rpl_ctrl_fields },
+ { RPL_RPL_DATA, 5, 128, NTHW_FPGA_REG_TYPE_WO, 0, 1, rpl_rpl_data_fields },
+};
+
static nthw_fpga_field_init_s rpp_lr_ifr_rcp_ctrl_fields[] = {
{ RPP_LR_IFR_RCP_CTRL_ADR, 4, 0, 0x0000 },
{ RPP_LR_IFR_RCP_CTRL_CNT, 16, 16, 0x0000 },
@@ -2498,6 +2536,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_SLC_LR, 0, MOD_SLC, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 2048, 2, slc_registers },
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
+ { MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2656,5 +2695,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 34, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 8c0c727e16..2b059d98ff 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -40,6 +40,7 @@
#define MOD_QSL (0x448ed859UL)
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
+#define MOD_RPL (0x6de535c3UL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
@@ -48,7 +49,7 @@
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
-#define MOD_IDX_COUNT (35)
+#define MOD_IDX_COUNT (36)
/* aliases - only aliases go below this point */
#endif /* _NTHW_FPGA_MOD_DEFS_H_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 52/80] net/ntnic: update alignment for virt queue structs
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (50 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 51/80] net/ntnic: add Tx RPL " Serhii Iliushyk
@ 2024-10-30 21:38 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 53/80] net/ntnic: enable RSS feature Serhii Iliushyk
` (27 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:38 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Fix the incorrect alignment of the virt queue structures.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v2
* Fix __rte_packed usage
The original NT PMD driver used pragma pack(1), which is equivalent
to combining the packed and aligned attributes.
Since the packed attribute already implies byte alignment,
aligned(1) can be omitted when packed is used.
---
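The packed/aligned equivalence can be checked with a small sketch (illustrative struct names, not the driver's virtq types):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Demonstrates why __rte_packed (i.e. __attribute__((packed))) is
 * equivalent to the original #pragma pack(1): packing already forces
 * byte alignment, so an extra aligned(1) adds nothing.
 */
#pragma pack(push, 1)
struct pragma_packed {
	uint16_t flags;
	uint32_t id;
};
#pragma pack(pop)

struct attr_packed {
	uint16_t flags;
	uint32_t id;
} __attribute__((packed));

/* Both layouts drop the 2 padding bytes a default layout would add */
_Static_assert(sizeof(struct pragma_packed) == 6, "no padding");
_Static_assert(sizeof(struct attr_packed) == 6, "no padding");
```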
drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
index bde0fed273..e46a3bef28 100644
--- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
+++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include <rte_common.h>
#include <unistd.h>
#include "ntos_drv.h"
@@ -67,20 +68,20 @@
} \
} while (0)
-struct __rte_aligned(8) virtq_avail {
+struct __rte_packed virtq_avail {
uint16_t flags;
uint16_t idx;
uint16_t ring[]; /* Queue Size */
};
-struct __rte_aligned(8) virtq_used_elem {
+struct __rte_packed virtq_used_elem {
/* Index of start of used descriptor chain. */
uint32_t id;
/* Total length of the descriptor chain which was used (written to) */
uint32_t len;
};
-struct __rte_aligned(8) virtq_used {
+struct __rte_packed virtq_used {
uint16_t flags;
uint16_t idx;
struct virtq_used_elem ring[]; /* Queue Size */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 53/80] net/ntnic: enable RSS feature
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (51 preceding siblings ...)
2024-10-30 21:38 ` [PATCH v5 52/80] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 54/80] net/ntnic: add statistics support Serhii Iliushyk
` (26 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Enable receive-side scaling (RSS).
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
v4
* Use RTE_MIN instead of the ternary operator.
---
doc/guides/nics/features/ntnic.ini | 3 +
doc/guides/nics/ntnic.rst | 7 ++
drivers/net/ntnic/include/create_elements.h | 1 +
drivers/net/ntnic/include/flow_api.h | 2 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 6 ++
.../profile_inline/flow_api_profile_inline.c | 43 +++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 75 +++++++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 73 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 ++
9 files changed, 217 insertions(+)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4cb9509742..e5d5abd0ed 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -10,6 +10,8 @@ Link status = Y
Queue start/stop = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
Linux = Y
x86-64 = Y
@@ -37,3 +39,4 @@ port_id = Y
queue = Y
raw_decap = Y
raw_encap = Y
+rss = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index f2cb7a362a..4ed732d9f8 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -56,6 +56,13 @@ Features
- Exact match of 140 million flows and policies.
- Tunnel HW offload: Packet type, inner/outer RSS, IP and UDP checksum
verification.
+- RSS hash
+- RSS key update
+- RSS based on VLAN or 5-tuple.
+- RSS using different combinations of fields: L3 only, L4 only or both, and
+ source only, destination only or both.
+- Several RSS hash keys, one for each flow type.
+- Default RSS operation with no hash key specification.
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 70e6cad195..eaa578e72a 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,7 @@ struct cnv_attr_s {
struct cnv_action_s {
struct rte_flow_action flow_actions[MAX_ACTIONS];
+ struct rte_flow_action_rss flow_rss;
struct flow_action_raw_encap encap;
struct flow_action_raw_decap decap;
struct rte_flow_action_queue queue;
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 2e96fa5bed..4a1525f237 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -114,6 +114,8 @@ struct flow_nic_dev {
struct flow_eth_dev *eth_base;
pthread_mutex_t mtx;
+ /* RSS hashing configuration */
+ struct nt_eth_rss_conf rss_conf;
/* next NIC linked list */
struct flow_nic_dev *next;
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index fc9c68ed1a..4847b2de99 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1027,6 +1027,12 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+
+ /*
+ * Other
+ */
+ .hw_mod_hsh_rcp_flush = hw_mod_hsh_rcp_flush,
+ .flow_nic_set_hasher_fields = flow_nic_set_hasher_fields,
};
void init_flow_filter(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 0232954bec..21d0df56e5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -603,6 +603,49 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RSS", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_rss rss_tmp;
+ const struct rte_flow_action_rss *rss =
+ memcpy_mask_if(&rss_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_rss));
+
+ if (rss->key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: RSS hash key length %u exceeds maximum value %u",
+ rss->key_len, MAX_RSS_KEY_LEN);
+ flow_nic_set_error(ERR_RSS_TOO_LONG_KEY, error);
+ return -1;
+ }
+
+ for (uint32_t i = 0; i < rss->queue_num; ++i) {
+ int hw_id = rx_queue_idx_to_hw_id(dev, rss->queue[i]);
+
+ fd->dst_id[fd->dst_num_avail].owning_port_id = dev->port;
+ fd->dst_id[fd->dst_num_avail].id = hw_id;
+ fd->dst_id[fd->dst_num_avail].type = PORT_VIRT;
+ fd->dst_id[fd->dst_num_avail].active = 1;
+ fd->dst_num_avail++;
+ }
+
+ fd->hsh.func = rss->func;
+ fd->hsh.types = rss->types;
+ fd->hsh.key = rss->key;
+ fd->hsh.key_len = rss->key_len;
+
+ NT_LOG(DBG, FILTER,
+ "Dev:%p: RSS func: %d, types: 0x%" PRIX64 ", key_len: %d",
+ dev, rss->func, rss->types, rss->key_len);
+
+ fd->full_offload = 0;
+ *num_queues += rss->queue_num;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_MARK:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MARK", dev);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index bfca8f28b1..91be894e87 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -214,6 +214,14 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
dev_info->max_rx_pktlen = HW_MAX_PKT_LEN;
dev_info->max_mtu = MAX_MTU;
+ if (p_adapter_info->fpga_info.profile == FPGA_INFO_PROFILE_INLINE) {
+ dev_info->flow_type_rss_offloads = NT_ETH_RSS_OFFLOAD_MASK;
+ dev_info->hash_key_size = MAX_RSS_KEY_LEN;
+
+ dev_info->rss_algo_capa = RTE_ETH_HASH_ALGO_CAPA_MASK(DEFAULT) |
+ RTE_ETH_HASH_ALGO_CAPA_MASK(TOEPLITZ);
+ }
+
if (internals->p_drv) {
dev_info->max_rx_queues = internals->nb_rx_queues;
dev_info->max_tx_queues = internals->nb_tx_queues;
@@ -1372,6 +1380,71 @@ promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
return 0;
}
+static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+ struct nt_eth_rss_conf tmp_rss_conf = { 0 };
+ const int hsh_idx = 0; /* hsh index 0 means the default receipt in HSH module */
+
+ if (rss_conf->rss_key != NULL) {
+ if (rss_conf->rss_key_len > MAX_RSS_KEY_LEN) {
+ NT_LOG(ERR, NTNIC,
+ "ERROR: - RSS hash key length %u exceeds maximum value %u",
+ rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
+ return -1;
+ }
+
+ rte_memcpy(&tmp_rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+ }
+
+ tmp_rss_conf.algorithm = rss_conf->algorithm;
+
+ tmp_rss_conf.rss_hf = rss_conf->rss_hf;
+ int res = flow_filter_ops->flow_nic_set_hasher_fields(ndev, hsh_idx, tmp_rss_conf);
+
+ if (res == 0) {
+ flow_filter_ops->hw_mod_hsh_rcp_flush(&ndev->be, hsh_idx, 1);
+ rte_memcpy(&ndev->rss_conf, &tmp_rss_conf, sizeof(struct nt_eth_rss_conf));
+
+ } else {
+ NT_LOG(ERR, NTNIC, "ERROR: - RSS hash update failed with error %i", res);
+ }
+
+ return res;
+}
+
+static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct flow_nic_dev *ndev = internals->flw_dev->ndev;
+
+ rss_conf->algorithm = (enum rte_eth_hash_function)ndev->rss_conf.algorithm;
+
+ rss_conf->rss_hf = ndev->rss_conf.rss_hf;
+
+ /*
+ * copy full stored key into rss_key and pad it with
+ * zeros up to rss_key_len / MAX_RSS_KEY_LEN
+ */
+ if (rss_conf->rss_key != NULL) {
+ int key_len = RTE_MIN(rss_conf->rss_key_len, MAX_RSS_KEY_LEN);
+ memset(rss_conf->rss_key, 0, rss_conf->rss_key_len);
+ rte_memcpy(rss_conf->rss_key, &ndev->rss_conf.rss_key, key_len);
+ rss_conf->rss_key_len = key_len;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
@@ -1395,6 +1468,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
.promiscuous_enable = promiscuous_enable,
+ .rss_hash_update = eth_dev_rss_hash_update,
+ .rss_hash_conf_get = rss_hash_conf_get,
};
/*
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 93d89d59f3..a435b60fb2 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -330,6 +330,79 @@ int create_action_elements_inline(struct cnv_action_s *action,
* Non-compatible actions handled here
*/
switch (type) {
+ case RTE_FLOW_ACTION_TYPE_RSS: {
+ const struct rte_flow_action_rss *rss =
+ (const struct rte_flow_action_rss *)actions[aidx].conf;
+
+ switch (rss->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_DEFAULT;
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ action->flow_rss.func =
+ (enum rte_eth_hash_function)
+ RTE_ETH_HASH_FUNCTION_TOEPLITZ;
+
+ if (rte_is_power_of_2(rss->queue_num) == 0) {
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - for Toeplitz the number of queues must be power of two");
+ return -1;
+ }
+
+ break;
+
+ case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT:
+ case RTE_ETH_HASH_FUNCTION_MAX:
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported function: %u",
+ rss->func);
+ return -1;
+ }
+
+ uint64_t tmp_rss_types = 0;
+
+ switch (rss->level) {
+ case 1:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_OUTERMOST;
+ break;
+
+ case 2:
+ /* clear/override level mask specified at types */
+ tmp_rss_types = rss->types & (~RTE_ETH_RSS_LEVEL_MASK);
+ action->flow_rss.types =
+ tmp_rss_types | RTE_ETH_RSS_LEVEL_INNERMOST;
+ break;
+
+ case 0:
+ /* keep level mask specified at types */
+ action->flow_rss.types = rss->types;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER,
+ "RTE ACTION RSS - unsupported level: %u",
+ rss->level);
+ return -1;
+ }
+
+ action->flow_rss.level = 0;
+ action->flow_rss.key_len = rss->key_len;
+ action->flow_rss.queue_num = rss->queue_num;
+ action->flow_rss.key = rss->key;
+ action->flow_rss.queue = rss->queue;
+ action->flow_actions[aidx].conf = &action->flow_rss;
+ }
+ break;
+
case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
const struct rte_flow_action_raw_decap *decap =
(const struct rte_flow_action_raw_decap *)actions[aidx]
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 12baa13800..e40ed9b949 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -316,6 +316,13 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+
+ /*
+ * Other
+ */
+ int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
+ struct nt_eth_rss_conf rss_conf);
+ int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 54/80] net/ntnic: add statistics support
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (52 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 53/80] net/ntnic: enable RSS feature Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 55/80] net/ntnic: add rpf module Serhii Iliushyk
` (25 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Statistics init, setup, get, and reset operations and their
implementations were added.
Statistics FPGA defines were also added.
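This patch adds per-queue counters (rx_pkts, rx_bytes, err_pkts) to the queue structs. The port-level totals a stats-get callback reports are conceptually a sum over those queues; a hypothetical model of that aggregation (struct and function names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: models how per-queue counters like the
 * rx_pkts/rx_bytes/err_pkts fields added to ntnic_rx_queue would be
 * summed into port-level totals. Not the actual driver code.
 */
struct rxq_counters {
	unsigned long rx_pkts;
	unsigned long rx_bytes;
	unsigned long err_pkts;
};

struct port_totals {
	unsigned long ipackets;
	unsigned long ibytes;
	unsigned long ierrors;
};

static struct port_totals sum_rx_queues(const struct rxq_counters *q,
					int nb_q)
{
	struct port_totals t = {0};

	for (int i = 0; i < nb_q; i++) {
		t.ipackets += q[i].rx_pkts;
		t.ibytes += q[i].rx_bytes;
		t.ierrors += q[i].err_pkts;
	}
	return t;
}
```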
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/adapter/nt4ga_adapter.c | 29 +-
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 192 +++++++++
.../net/ntnic/include/common_adapter_defs.h | 15 +
drivers/net/ntnic/include/create_elements.h | 4 +
drivers/net/ntnic/include/nt4ga_adapter.h | 2 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 149 +++++++
drivers/net/ntnic/include/ntos_drv.h | 9 +
.../ntnic/include/stream_binary_flow_api.h | 5 +
drivers/net/ntnic/meson.build | 3 +
.../net/ntnic/nthw/core/include/nthw_rmc.h | 1 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 10 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 370 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 40 ++
drivers/net/ntnic/ntnic_ethdev.c | 119 +++++-
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 132 +++++++
drivers/net/ntnic/ntnic_mod_reg.c | 30 ++
drivers/net/ntnic/ntnic_mod_reg.h | 17 +
drivers/net/ntnic/ntutil/nt_util.h | 1 +
21 files changed, 1119 insertions(+), 12 deletions(-)
create mode 100644 drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
create mode 100644 drivers/net/ntnic/include/common_adapter_defs.h
create mode 100644 drivers/net/ntnic/nthw/stat/nthw_stat.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_adapter.c b/drivers/net/ntnic/adapter/nt4ga_adapter.c
index d9e6716c30..fa72dfda8d 100644
--- a/drivers/net/ntnic/adapter/nt4ga_adapter.c
+++ b/drivers/net/ntnic/adapter/nt4ga_adapter.c
@@ -212,19 +212,26 @@ static int nt4ga_adapter_init(struct adapter_info_s *p_adapter_info)
}
}
- nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
- if (p_nthw_rmc == NULL) {
- NT_LOG(ERR, NTNIC, "Failed to allocate memory for RMC module");
- return -1;
- }
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
- res = nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
- if (res) {
- NT_LOG(ERR, NTNIC, "Failed to initialize RMC module");
- return -1;
- }
+ if (nt4ga_stat_ops != NULL) {
+ /* Nt4ga Stat init/setup */
+ res = nt4ga_stat_ops->nt4ga_stat_init(p_adapter_info);
+
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot initialize the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+
+ res = nt4ga_stat_ops->nt4ga_stat_setup(p_adapter_info);
- nthw_rmc_unblock(p_nthw_rmc, false);
+ if (res != 0) {
+ NT_LOG(ERR, NTNIC, "%s: Cannot setup the statistics module",
+ p_adapter_id_str);
+ return res;
+ }
+ }
return 0;
}
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
new file mode 100644
index 0000000000..0e20f3ea45
--- /dev/null
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -0,0 +1,192 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+#include "nt_util.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "nthw_fpga_param_defs.h"
+#include "nt4ga_adapter.h"
+#include "ntnic_nim.h"
+#include "flow_filter.h"
+#include "ntnic_mod_reg.h"
+
+#define DEFAULT_MAX_BPS_SPEED 100e9
+
+static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
+{
+ const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
+ fpga_info_t *fpga_info = &p_adapter_info->fpga_info;
+ nthw_fpga_t *p_fpga = fpga_info->mp_fpga;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+
+ if (p_nt4ga_stat) {
+ memset(p_nt4ga_stat, 0, sizeof(nt4ga_stat_t));
+
+ } else {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ {
+ nthw_stat_t *p_nthw_stat = nthw_stat_new();
+
+ if (!p_nthw_stat) {
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ if (nthw_rmc_init(NULL, p_fpga, 0) == 0) {
+ nthw_rmc_t *p_nthw_rmc = nthw_rmc_new();
+
+ if (!p_nthw_rmc) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rmc_init(p_nthw_rmc, p_fpga, 0);
+ p_nt4ga_stat->mp_nthw_rmc = p_nthw_rmc;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rmc = NULL;
+ }
+
+ p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
+ nthw_stat_init(p_nthw_stat, p_fpga, 0);
+
+ p_nt4ga_stat->mn_rx_host_buffers = p_nthw_stat->m_nb_rx_host_buffers;
+ p_nt4ga_stat->mn_tx_host_buffers = p_nthw_stat->m_nb_tx_host_buffers;
+
+ p_nt4ga_stat->mn_rx_ports = p_nthw_stat->m_nb_rx_ports;
+ p_nt4ga_stat->mn_tx_ports = p_nthw_stat->m_nb_tx_ports;
+ }
+
+ return 0;
+}
+
+static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
+{
+ const int n_physical_adapter_no = p_adapter_info->adapter_no;
+ (void)n_physical_adapter_no;
+ nt4ga_stat_t *p_nt4ga_stat = &p_adapter_info->nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+
+ /* Allocate and map memory for fpga statistics */
+ {
+ uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
+ sizeof(p_nthw_stat->mp_timestamp));
+ struct nt_dma_s *p_dma;
+ int numa_node = p_adapter_info->fpga_info.numa_node;
+
+ /* FPGA needs a 16K alignment on Statistics */
+ p_dma = nt_dma_alloc(n_stat_size, 0x4000, numa_node);
+
+ if (!p_dma) {
+ NT_LOG_DBGX(ERR, NTNIC, "p_dma alloc failed");
+ return -1;
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%x @%d %" PRIx64 " %" PRIx64, n_stat_size, numa_node,
+ p_dma->addr, p_dma->iova);
+
+ NT_LOG(DBG, NTNIC,
+ "DMA: Physical adapter %02d, PA = 0x%016" PRIX64 " DMA = 0x%016" PRIX64
+ " size = 0x%" PRIX32 "",
+ n_physical_adapter_no, p_dma->iova, p_dma->addr, n_stat_size);
+
+ p_nt4ga_stat->p_stat_dma_virtual = (uint32_t *)p_dma->addr;
+ p_nt4ga_stat->n_stat_size = n_stat_size;
+ p_nt4ga_stat->p_stat_dma = p_dma;
+
+ memset(p_nt4ga_stat->p_stat_dma_virtual, 0xaa, n_stat_size);
+ nthw_stat_set_dma_address(p_nthw_stat, p_dma->iova,
+ p_nt4ga_stat->p_stat_dma_virtual);
+ }
+
+ if (p_nt4ga_stat->mp_nthw_rmc)
+ nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+
+ p_nt4ga_stat->mp_stat_structs_color =
+ calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_color) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_hb =
+ calloc(p_nt4ga_stat->mn_rx_host_buffers + p_nt4ga_stat->mn_tx_host_buffers,
+ sizeof(struct host_buffer_counters));
+
+ if (!p_nt4ga_stat->mp_stat_structs_hb) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_rx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_counters_v2));
+
+ if (!p_nt4ga_stat->cap.mp_stat_structs_port_tx) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_port_load =
+ calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
+
+ if (!p_nt4ga_stat->mp_port_load) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+#ifdef NIM_TRIGGER
+ uint64_t max_bps_speed = nt_get_max_link_speed(p_adapter_info->nt4ga_link.speed_capa);
+
+ if (max_bps_speed == 0)
+ max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+
+#else
+ uint64_t max_bps_speed = DEFAULT_MAX_BPS_SPEED;
+ NT_LOG(ERR, NTNIC, "NIM module not included");
+#endif
+
+ for (int p = 0; p < NUM_ADAPTER_PORTS_MAX; p++) {
+ p_nt4ga_stat->mp_port_load[p].rx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].tx_bps_max = max_bps_speed;
+ p_nt4ga_stat->mp_port_load[p].rx_pps_max = max_bps_speed / (8 * (20 + 64));
+ p_nt4ga_stat->mp_port_load[p].tx_pps_max = max_bps_speed / (8 * (20 + 64));
+ }
+
+ memset(p_nt4ga_stat->a_stat_structs_color_base, 0,
+ sizeof(struct color_counters) * NT_MAX_COLOR_FLOW_STATS);
+ p_nt4ga_stat->last_timestamp = 0;
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ return 0;
+}
+
+static struct nt4ga_stat_ops ops = {
+ .nt4ga_stat_init = nt4ga_stat_init,
+ .nt4ga_stat_setup = nt4ga_stat_setup,
+};
+
+void nt4ga_stat_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "Stat module was initialized");
+ register_nt4ga_stat_ops(&ops);
+}
diff --git a/drivers/net/ntnic/include/common_adapter_defs.h b/drivers/net/ntnic/include/common_adapter_defs.h
new file mode 100644
index 0000000000..6ed9121f0f
--- /dev/null
+++ b/drivers/net/ntnic/include/common_adapter_defs.h
@@ -0,0 +1,15 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef _COMMON_ADAPTER_DEFS_H_
+#define _COMMON_ADAPTER_DEFS_H_
+
+/*
+ * Declarations shared by NT adapter types.
+ */
+#define NUM_ADAPTER_MAX (8)
+#define NUM_ADAPTER_PORTS_MAX (128)
+
+#endif /* _COMMON_ADAPTER_DEFS_H_ */
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index eaa578e72a..1456977837 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -46,6 +46,10 @@ struct rte_flow {
uint32_t flow_stat_id;
+ uint64_t stat_pkts;
+ uint64_t stat_bytes;
+ uint8_t stat_tcp_flags;
+
uint16_t caller_id;
};
diff --git a/drivers/net/ntnic/include/nt4ga_adapter.h b/drivers/net/ntnic/include/nt4ga_adapter.h
index 809135f130..fef79ce358 100644
--- a/drivers/net/ntnic/include/nt4ga_adapter.h
+++ b/drivers/net/ntnic/include/nt4ga_adapter.h
@@ -6,6 +6,7 @@
#ifndef _NT4GA_ADAPTER_H_
#define _NT4GA_ADAPTER_H_
+#include "ntnic_stat.h"
#include "nt4ga_link.h"
typedef struct hw_info_s {
@@ -30,6 +31,7 @@ typedef struct hw_info_s {
#include "ntnic_stat.h"
typedef struct adapter_info_s {
+ struct nt4ga_stat_s nt4ga_stat;
struct nt4ga_filter_s nt4ga_filter;
struct nt4ga_link_s nt4ga_link;
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 8ebdd98db0..1135e9a539 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -15,6 +15,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
+ pthread_mutex_t stat_lck;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 148088fe1d..2aee3f8425 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -6,6 +6,155 @@
#ifndef NTNIC_STAT_H_
#define NTNIC_STAT_H_
+#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_fpga_model.h"
+
+#define NT_MAX_COLOR_FLOW_STATS 0x400
+
+struct nthw_stat {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_stat;
+ int mn_instance;
+
+ int mn_stat_layout_version;
+
+ bool mb_has_tx_stats;
+
+ int m_nb_phy_ports;
+ int m_nb_nim_ports;
+
+ int m_nb_rx_ports;
+ int m_nb_tx_ports;
+
+ int m_nb_rx_host_buffers;
+ int m_nb_tx_host_buffers;
+
+ int m_dbs_present;
+
+ int m_rx_port_replicate;
+
+ int m_nb_color_counters;
+
+ int m_nb_rx_hb_counters;
+ int m_nb_tx_hb_counters;
+
+ int m_nb_rx_port_counters;
+ int m_nb_tx_port_counters;
+
+ int m_nb_counters;
+
+ int m_nb_rpp_per_ps;
+
+ nthw_field_t *mp_fld_dma_ena;
+ nthw_field_t *mp_fld_cnt_clear;
+
+ nthw_field_t *mp_fld_tx_disable;
+
+ nthw_field_t *mp_fld_cnt_freeze;
+
+ nthw_field_t *mp_fld_stat_toggle_missed;
+
+ nthw_field_t *mp_fld_dma_lsb;
+ nthw_field_t *mp_fld_dma_msb;
+
+ nthw_field_t *mp_fld_load_bin;
+ nthw_field_t *mp_fld_load_bps_rx0;
+ nthw_field_t *mp_fld_load_bps_rx1;
+ nthw_field_t *mp_fld_load_bps_tx0;
+ nthw_field_t *mp_fld_load_bps_tx1;
+ nthw_field_t *mp_fld_load_pps_rx0;
+ nthw_field_t *mp_fld_load_pps_rx1;
+ nthw_field_t *mp_fld_load_pps_tx0;
+ nthw_field_t *mp_fld_load_pps_tx1;
+
+ uint64_t m_stat_dma_physical;
+ uint32_t *mp_stat_dma_virtual;
+
+ uint64_t *mp_timestamp;
+};
+
+typedef struct nthw_stat nthw_stat_t;
+typedef struct nthw_stat nthw_stat;
+
+struct color_counters {
+ uint64_t color_packets;
+ uint64_t color_bytes;
+ uint8_t tcp_flags;
+};
+
+struct host_buffer_counters {
+};
+
+struct port_load_counters {
+ uint64_t rx_pps_max;
+ uint64_t tx_pps_max;
+ uint64_t rx_bps_max;
+ uint64_t tx_bps_max;
+};
+
+struct port_counters_v2 {
+};
+
+struct flm_counters_v1 {
+};
+
+struct nt4ga_stat_s {
+ nthw_stat_t *mp_nthw_stat;
+ nthw_rmc_t *mp_nthw_rmc;
+ struct nt_dma_s *p_stat_dma;
+ uint32_t *p_stat_dma_virtual;
+ uint32_t n_stat_size;
+
+ uint64_t last_timestamp;
+
+ int mn_rx_host_buffers;
+ int mn_tx_host_buffers;
+
+ int mn_rx_ports;
+ int mn_tx_ports;
+
+ struct color_counters *mp_stat_structs_color;
+ /* For calculating increments between stats polls */
+ struct color_counters a_stat_structs_color_base[NT_MAX_COLOR_FLOW_STATS];
+
+ /* Port counters for inline */
+ struct {
+ struct port_counters_v2 *mp_stat_structs_port_rx;
+ struct port_counters_v2 *mp_stat_structs_port_tx;
+ } cap;
+
+ struct host_buffer_counters *mp_stat_structs_hb;
+ struct port_load_counters *mp_port_load;
+
+ /* Rx/Tx totals: */
+ uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
+
+ uint64_t a_port_rx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ /* Base is for calculating increments between statistics reads */
+ uint64_t a_port_rx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_packets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_packets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_rx_drops_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_rx_drops_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_octets_total[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_octets_base[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
+ uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+};
+
+typedef struct nt4ga_stat_s nt4ga_stat_t;
+
+nthw_stat_t *nthw_stat_new(void);
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_stat_delete(nthw_stat_t *p);
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual);
+int nthw_stat_trigger(nthw_stat_t *p);
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 8fd577dfe3..7b3c8ff3d6 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -57,6 +57,9 @@ struct __rte_cache_aligned ntnic_rx_queue {
struct flow_queue_id_s queue; /* queue info - user id and hw queue index */
struct rte_mempool *mb_pool; /* mbuf memory pool */
uint16_t buf_size; /* Size of data area in mbuf */
+ unsigned long rx_pkts; /* Rx packet statistics */
+ unsigned long rx_bytes; /* Rx bytes statistics */
+ unsigned long err_pkts; /* Rx error packet statistics */
int enabled; /* Enabling/disabling of this queue */
struct hwq_s hwq;
@@ -80,6 +83,9 @@ struct __rte_cache_aligned ntnic_tx_queue {
int rss_target_id;
uint32_t port; /* Tx port for this queue */
+ unsigned long tx_pkts; /* Tx packet statistics */
+ unsigned long tx_bytes; /* Tx bytes statistics */
+ unsigned long err_pkts; /* Tx error packet stat */
int enabled; /* Enabling/disabling of this queue */
enum fpga_info_profile profile; /* Inline / Capture */
};
@@ -95,6 +101,7 @@ struct pmd_internals {
/* Offset of the VF from the PF */
uint8_t vf_offset;
uint32_t port;
+ uint32_t port_id;
nt_meta_port_type_t type;
struct flow_queue_id_s vpq[MAX_QUEUES];
unsigned int vpq_nb_vq;
@@ -107,6 +114,8 @@ struct pmd_internals {
struct rte_ether_addr eth_addrs[NUM_MAC_ADDRS_PER_PORT];
/* Multicast ethernet (MAC) addresses. */
struct rte_ether_addr mc_addrs[NUM_MULTICAST_ADDRS_PER_PORT];
+ uint64_t last_stat_rtc;
+ uint64_t rx_missed;
struct pmd_internals *next;
};
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index e5fe686d99..4ce1561033 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -6,6 +6,7 @@
#ifndef _STREAM_BINARY_FLOW_API_H_
#define _STREAM_BINARY_FLOW_API_H_
+#include <rte_ether.h>
#include "rte_flow.h"
#include "rte_flow_driver.h"
@@ -44,6 +45,10 @@
#define FLOW_MAX_QUEUES 128
#define RAW_ENCAP_DECAP_ELEMS_MAX 16
+
+extern uint64_t rte_tsc_freq;
+extern rte_spinlock_t hwlock;
+
/*
* Flow eth dev profile determines how the FPGA module resources are
* managed and what features are available
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 92167d24e4..216341bb11 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -25,10 +25,12 @@ includes = [
# all sources
sources = files(
'adapter/nt4ga_adapter.c',
+ 'adapter/nt4ga_stat/nt4ga_stat.c',
'dbsconfig/ntnic_dbsconfig.c',
'link_mgmt/link_100g/nt4ga_link_100g.c',
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
+ 'ntnic_filter/ntnic_filter.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
@@ -48,6 +50,7 @@ sources = files(
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
+ 'nthw/stat/nthw_stat.c',
'nthw/flow_api/flow_api.c',
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index 2345820bdc..b239752674 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -44,6 +44,7 @@ typedef struct nthw_rmc nthw_rmc;
nthw_rmc_t *nthw_rmc_new(void);
int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 4a01424c24..748519aeb4 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,16 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+void nthw_rmc_block(nthw_rmc_t *p)
+{
+ /* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
+ if (!p->mb_administrative_block) {
+ nthw_field_set_flush(p->mp_fld_ctrl_block_stat_drop);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_keep_alive);
+ nthw_field_set_flush(p->mp_fld_ctrl_block_mac_port);
+ }
+}
+
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary)
{
uint32_t n_block_mask = ~0U << (b_is_secondary ? p->mn_nims : p->mn_ports);
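The `~0U << n` expression above builds the unblock mask: every MAC-port bit above the first `mn_ports` (or `mn_nims`) bits stays blocked while the low ports are released. A tiny sketch of that mask arithmetic (the function name is illustrative):

```c
#include <stdint.h>

/* Set every bit above the first n_low_bits; callers must keep
 * n_low_bits below 32, since a full-width shift is undefined in C. */
static inline uint32_t rmc_block_mask(unsigned int n_low_bits)
{
	return ~0U << n_low_bits;
}
```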
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
new file mode 100644
index 0000000000..6adcd2e090
--- /dev/null
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -0,0 +1,370 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "nt_util.h"
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "ntnic_stat.h"
+
+#include <malloc.h>
+
+nthw_stat_t *nthw_stat_new(void)
+{
+ nthw_stat_t *p = malloc(sizeof(nthw_stat_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_stat_t));
+
+ return p;
+}
+
+void nthw_stat_delete(nthw_stat_t *p)
+{
+ if (p)
+ free(p);
+}
+
+int nthw_stat_init(nthw_stat_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ uint64_t n_module_version_packed64 = -1;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_STA, n_instance);
+
+	if (p == NULL)	/* probe call: only report whether the module exists */
+		return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: STAT %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_stat = mod;
+
+ n_module_version_packed64 = nthw_module_get_version_packed64(p->mp_mod_stat);
+	NT_LOG(DBG, NTHW, "%s: STAT %d: version=0x%08" PRIX64, p_adapter_id_str, p->mn_instance,
+		n_module_version_packed64);
+
+ {
+ nthw_register_t *p_reg;
+ /* STA_CFG register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_CFG);
+ p->mp_fld_dma_ena = nthw_register_get_field(p_reg, STA_CFG_DMA_ENA);
+ p->mp_fld_cnt_clear = nthw_register_get_field(p_reg, STA_CFG_CNT_CLEAR);
+
+ /* CFG: fields NOT available from v. 3 */
+ p->mp_fld_tx_disable = nthw_register_query_field(p_reg, STA_CFG_TX_DISABLE);
+ p->mp_fld_cnt_freeze = nthw_register_query_field(p_reg, STA_CFG_CNT_FRZ);
+
+ /* STA_STATUS register */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_STATUS);
+ p->mp_fld_stat_toggle_missed =
+ nthw_register_get_field(p_reg, STA_STATUS_STAT_TOGGLE_MISSED);
+
+ /* HOST_ADR registers */
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_LSB);
+ p->mp_fld_dma_lsb = nthw_register_get_field(p_reg, STA_HOST_ADR_LSB_LSB);
+
+ p_reg = nthw_module_get_register(p->mp_mod_stat, STA_HOST_ADR_MSB);
+ p->mp_fld_dma_msb = nthw_register_get_field(p_reg, STA_HOST_ADR_MSB_MSB);
+
+ /* Binning cycles */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BIN);
+
+ if (p_reg) {
+ p->mp_fld_load_bin = nthw_register_get_field(p_reg, STA_LOAD_BIN_BIN);
+
+ /* Bandwidth load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx0 = NULL;
+ }
+
+ /* Bandwidth load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_RX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_rx1 = NULL;
+ }
+
+ /* Bandwidth load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_0_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx0 = NULL;
+ }
+
+ /* Bandwidth load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_BPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_bps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_BPS_TX_1_BPS);
+
+ } else {
+ p->mp_fld_load_bps_tx1 = NULL;
+ }
+
+ /* Packet load for RX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx0 = NULL;
+ }
+
+ /* Packet load for RX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_RX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_rx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_RX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_rx1 = NULL;
+ }
+
+ /* Packet load for TX port 0 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_0);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx0 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_0_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx0 = NULL;
+ }
+
+ /* Packet load for TX port 1 */
+ p_reg = nthw_module_query_register(p->mp_mod_stat, STA_LOAD_PPS_TX_1);
+
+ if (p_reg) {
+ p->mp_fld_load_pps_tx1 =
+ nthw_register_get_field(p_reg, STA_LOAD_PPS_TX_1_PPS);
+
+ } else {
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+
+ } else {
+ p->mp_fld_load_bin = NULL;
+ p->mp_fld_load_bps_rx0 = NULL;
+ p->mp_fld_load_bps_rx1 = NULL;
+ p->mp_fld_load_bps_tx0 = NULL;
+ p->mp_fld_load_bps_tx1 = NULL;
+ p->mp_fld_load_pps_rx0 = NULL;
+ p->mp_fld_load_pps_rx1 = NULL;
+ p->mp_fld_load_pps_tx0 = NULL;
+ p->mp_fld_load_pps_tx1 = NULL;
+ }
+ }
+
+ /* Params */
+ p->m_nb_nim_ports = nthw_fpga_get_product_param(p_fpga, NT_NIMS, 0);
+ p->m_nb_phy_ports = nthw_fpga_get_product_param(p_fpga, NT_PHY_PORTS, 0);
+
+ /* VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_STA_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_RX_PORTS, -1);
+
+ if (p->m_nb_rx_ports == -1) {
+ /* non-VSWITCH */
+ p->m_nb_rx_ports = nthw_fpga_get_product_param(p_fpga, NT_PORTS, 0);
+ }
+ }
+
+ p->m_nb_rpp_per_ps = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+
+ p->m_nb_tx_ports = nthw_fpga_get_product_param(p_fpga, NT_TX_PORTS, 0);
+ p->m_rx_port_replicate = nthw_fpga_get_product_param(p_fpga, NT_RX_PORT_REPLICATE, 0);
+
+ /* VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_STA_COLORS, 64) * 2;
+
+ if (p->m_nb_color_counters == 0) {
+ /* non-VSWITCH */
+ p->m_nb_color_counters = nthw_fpga_get_product_param(p_fpga, NT_CAT_FUNCS, 0) * 2;
+ }
+
+ p->m_nb_rx_host_buffers = nthw_fpga_get_product_param(p_fpga, NT_QUEUES, 0);
+ p->m_nb_tx_host_buffers = p->m_nb_rx_host_buffers;
+
+ p->m_dbs_present = nthw_fpga_get_product_param(p_fpga, NT_DBS_PRESENT, 0);
+
+ p->m_nb_rx_hb_counters = (p->m_nb_rx_host_buffers * (6 + 2 *
+ (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ?
+ p->m_dbs_present : 0)));
+
+ p->m_nb_tx_hb_counters = 0;
+
+ p->m_nb_rx_port_counters = 42 +
+ 2 * (n_module_version_packed64 >= VERSION_PACKED64(0, 6) ? p->m_dbs_present : 0);
+ p->m_nb_tx_port_counters = 0;
+
+ p->m_nb_counters =
+ p->m_nb_color_counters + p->m_nb_rx_hb_counters + p->m_nb_tx_hb_counters;
+
+ p->mn_stat_layout_version = 0;
+
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 9)) {
+ p->mn_stat_layout_version = 7;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 8)) {
+ p->mn_stat_layout_version = 6;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->mn_stat_layout_version = 5;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 4)) {
+ p->mn_stat_layout_version = 4;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 3)) {
+ p->mn_stat_layout_version = 3;
+
+ } else if (n_module_version_packed64 >= VERSION_PACKED64(0, 2)) {
+ p->mn_stat_layout_version = 2;
+
+ } else if (n_module_version_packed64 > VERSION_PACKED64(0, 0)) {
+ p->mn_stat_layout_version = 1;
+
+ } else {
+ p->mn_stat_layout_version = 0;
+		NT_LOG(ERR, NTHW, "%s: unknown module_version 0x%08" PRIX64 " layout=%d",
+			p_adapter_id_str, n_module_version_packed64, p->mn_stat_layout_version);
+ }
+
+ assert(p->mn_stat_layout_version);
+
+ /* STA module 0.2+ adds IPF counters per port (Rx feature) */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 2))
+ p->m_nb_rx_port_counters += 6;
+
+ /* STA module 0.3+ adds TX stats */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3) || p->m_nb_tx_ports >= 1)
+ p->mb_has_tx_stats = true;
+
+ /* STA module 0.3+ adds TX stat counters */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 3))
+ p->m_nb_tx_port_counters += 22;
+
+ /* STA module 0.4+ adds TX drop event counter */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 4))
+ p->m_nb_tx_port_counters += 1; /* TX drop event counter */
+
+ /*
+ * STA module 0.6+ adds pkt filter drop octets+pkts, retransmit and
+ * duplicate counters
+ */
+ if (n_module_version_packed64 >= VERSION_PACKED64(0, 6)) {
+ p->m_nb_rx_port_counters += 4;
+ p->m_nb_tx_port_counters += 1;
+ }
+
+ p->m_nb_counters += (p->m_nb_rx_ports * p->m_nb_rx_port_counters);
+
+ if (p->mb_has_tx_stats)
+ p->m_nb_counters += (p->m_nb_tx_ports * p->m_nb_tx_port_counters);
+
+ /* Output params (debug) */
+ NT_LOG(DBG, NTHW, "%s: nims=%d rxports=%d txports=%d rxrepl=%d colors=%d queues=%d",
+ p_adapter_id_str, p->m_nb_nim_ports, p->m_nb_rx_ports, p->m_nb_tx_ports,
+ p->m_rx_port_replicate, p->m_nb_color_counters, p->m_nb_rx_host_buffers);
+ NT_LOG(DBG, NTHW, "%s: hbs=%d hbcounters=%d rxcounters=%d txcounters=%d",
+ p_adapter_id_str, p->m_nb_rx_host_buffers, p->m_nb_rx_hb_counters,
+ p->m_nb_rx_port_counters, p->m_nb_tx_port_counters);
+ NT_LOG(DBG, NTHW, "%s: layout=%d", p_adapter_id_str, p->mn_stat_layout_version);
+ NT_LOG(DBG, NTHW, "%s: counters=%d (0x%X)", p_adapter_id_str, p->m_nb_counters,
+ p->m_nb_counters);
+
+ /* Init */
+ if (p->mp_fld_tx_disable)
+ nthw_field_set_flush(p->mp_fld_tx_disable);
+
+ nthw_field_update_register(p->mp_fld_cnt_clear);
+ nthw_field_set_flush(p->mp_fld_cnt_clear);
+ nthw_field_clr_flush(p->mp_fld_cnt_clear);
+
+ nthw_field_update_register(p->mp_fld_stat_toggle_missed);
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_clr_flush(p->mp_fld_dma_ena);
+ nthw_field_update_register(p->mp_fld_dma_ena);
+
+	/* Set the sliding window size for port load */
+ if (p->mp_fld_load_bin) {
+ uint32_t rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ uint32_t bin =
+ (uint32_t)(((PORT_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) -
+ 1ULL);
+ nthw_field_set_val_flush32(p->mp_fld_load_bin, bin);
+ }
+
+ return 0;
+}
+
+int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
+ uint32_t *p_stat_dma_virtual)
+{
+ assert(p_stat_dma_virtual);
+ p->mp_timestamp = NULL;
+
+ p->m_stat_dma_physical = stat_dma_physical;
+ p->mp_stat_dma_virtual = p_stat_dma_virtual;
+
+ memset(p->mp_stat_dma_virtual, 0, (p->m_nb_counters * sizeof(uint32_t)));
+
+ nthw_field_set_val_flush32(p->mp_fld_dma_msb,
+ (uint32_t)((p->m_stat_dma_physical >> 32) & 0xffffffff));
+ nthw_field_set_val_flush32(p->mp_fld_dma_lsb,
+ (uint32_t)(p->m_stat_dma_physical & 0xffffffff));
+
+ p->mp_timestamp = (uint64_t *)(p->mp_stat_dma_virtual + p->m_nb_counters);
+ NT_LOG(DBG, NTHW,
+ "stat_dma_physical=%" PRIX64 " p_stat_dma_virtual=%" PRIX64
+ " mp_timestamp=%" PRIX64 "", p->m_stat_dma_physical,
+ (uint64_t)p->mp_stat_dma_virtual, (uint64_t)p->mp_timestamp);
+ *p->mp_timestamp = (uint64_t)(int64_t)-1;
+ return 0;
+}
+
+int nthw_stat_trigger(nthw_stat_t *p)
+{
+ int n_toggle_miss = nthw_field_get_updated(p->mp_fld_stat_toggle_missed);
+
+ if (n_toggle_miss)
+ nthw_field_set_flush(p->mp_fld_stat_toggle_missed);
+
+ if (p->mp_timestamp)
+ *p->mp_timestamp = -1; /* Clear old ts */
+
+ nthw_field_update_register(p->mp_fld_dma_ena);
+ nthw_field_set_flush(p->mp_fld_dma_ena);
+
+ return 0;
+}
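nthw_stat_set_dma_address() lays the DMA buffer out as `m_nb_counters` 32-bit counters followed by one 64-bit timestamp, and seeds the timestamp with an all-ones sentinel that the FPGA overwrites once a snapshot has landed. A sketch of that layout logic (helper names are illustrative):

```c
#include <stdint.h>

/* The timestamp word sits immediately after the 32-bit counter array. */
static inline uint64_t *stat_timestamp_ptr(uint32_t *dma_virt, int n_counters)
{
	return (uint64_t *)(dma_virt + n_counters);
}

/* The sentinel written at setup (and before each trigger) is all ones;
 * any other value means the FPGA has delivered a fresh snapshot. */
static inline int stat_snapshot_ready(const uint64_t *ts)
{
	return *ts != (uint64_t)-1;
}
```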
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 2b059d98ff..ddc144dc02 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -46,6 +46,7 @@
#define MOD_SDC (0xd2369530UL)
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
+#define MOD_STA (0x76fae64dUL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7741aa563f..8f196f885f 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -45,6 +45,7 @@
#include "nthw_fpga_reg_defs_sdc.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
+#include "nthw_fpga_reg_defs_sta.h"
#include "nthw_fpga_reg_defs_tx_cpy.h"
#include "nthw_fpga_reg_defs_tx_ins.h"
#include "nthw_fpga_reg_defs_tx_rpl.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
new file mode 100644
index 0000000000..640ffcbc52
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -0,0 +1,40 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_STA_
+#define _NTHW_FPGA_REG_DEFS_STA_
+
+/* STA */
+#define STA_CFG (0xcecaf9f4UL)
+#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
+#define STA_CFG_CNT_FRZ (0x8c27a596UL)
+#define STA_CFG_DMA_ENA (0x940dbacUL)
+#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_HOST_ADR_LSB (0xde569336UL)
+#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
+#define STA_HOST_ADR_MSB (0xdf94f901UL)
+#define STA_HOST_ADR_MSB_MSB (0x114798c8UL)
+#define STA_LOAD_BIN (0x2e842591UL)
+#define STA_LOAD_BIN_BIN (0x1a2b942eUL)
+#define STA_LOAD_BPS_RX_0 (0xbf8f4595UL)
+#define STA_LOAD_BPS_RX_0_BPS (0x41647781UL)
+#define STA_LOAD_BPS_RX_1 (0xc8887503UL)
+#define STA_LOAD_BPS_RX_1_BPS (0x7c045e31UL)
+#define STA_LOAD_BPS_TX_0 (0x9ae41a49UL)
+#define STA_LOAD_BPS_TX_0_BPS (0x870b7e06UL)
+#define STA_LOAD_BPS_TX_1 (0xede32adfUL)
+#define STA_LOAD_BPS_TX_1_BPS (0xba6b57b6UL)
+#define STA_LOAD_PPS_RX_0 (0x811173c3UL)
+#define STA_LOAD_PPS_RX_0_PPS (0xbee573fcUL)
+#define STA_LOAD_PPS_RX_1 (0xf6164355UL)
+#define STA_LOAD_PPS_RX_1_PPS (0x83855a4cUL)
+#define STA_LOAD_PPS_TX_0 (0xa47a2c1fUL)
+#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
+#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
+#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_STATUS (0x91c5c51cUL)
+#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_STA_ */
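The version ladder in nthw_stat_init() maps the packed STA module version to a statistics layout version. Condensed as a pure function (the major/minor packing of `VER` is an assumption standing in for the driver's VERSION_PACKED64 macro):

```c
#include <stdint.h>

/* Assumed packing: major version in the upper 32 bits, minor in the
 * lower 32 bits (stand-in for the driver's VERSION_PACKED64 macro). */
#define VER(maj, min) (((uint64_t)(maj) << 32) | (uint32_t)(min))

static int sta_layout_version(uint64_t v)
{
	if (v >= VER(0, 9))
		return 7;
	if (v >= VER(0, 8))
		return 6;
	if (v >= VER(0, 6))
		return 5;
	if (v >= VER(0, 4))
		return 4;
	if (v >= VER(0, 3))
		return 3;
	if (v >= VER(0, 2))
		return 2;
	if (v > VER(0, 0))
		return 1;
	return 0;	/* unknown module version */
}
```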
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 91be894e87..3d02e79691 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -65,6 +65,8 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define MAX_RX_PACKETS 128
#define MAX_TX_PACKETS 128
+uint64_t rte_tsc_freq;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -88,7 +90,7 @@ static const struct rte_pci_id nthw_pci_id_map[] = {
static const struct sg_ops_s *sg_ops;
-static rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
+rte_spinlock_t hwlock = RTE_SPINLOCK_INITIALIZER;
/*
* Store and get adapter info
@@ -156,6 +158,102 @@ get_pdrv_from_pci(struct rte_pci_addr addr)
return p_drv;
}
+static int dpdk_stats_collect(struct pmd_internals *internals, struct rte_eth_stats *stats)
+{
+ const struct ntnic_filter_ops *ntnic_filter_ops = get_ntnic_filter_ops();
+
+ if (ntnic_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "ntnic_filter_ops uninitialized");
+ return -1;
+ }
+
+ unsigned int i;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const int if_index = internals->n_intf_no;
+ uint64_t rx_total = 0;
+ uint64_t rx_total_b = 0;
+ uint64_t tx_total = 0;
+ uint64_t tx_total_b = 0;
+ uint64_t tx_err_total = 0;
+
+ if (!p_nthw_stat || !p_nt4ga_stat || !stats || if_index < 0 ||
+		if_index >= NUM_ADAPTER_PORTS_MAX) {
+ NT_LOG_DBGX(WRN, NTNIC, "error exit");
+ return -1;
+ }
+
+ /*
+ * Pull the latest port statistic numbers (Rx/Tx pkts and bytes)
+ * Return values are in the "internals->rxq_scg[]" and "internals->txq_scg[]" arrays
+ */
+ ntnic_filter_ops->poll_statistics(internals);
+
+ memset(stats, 0, sizeof(*stats));
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_rx_queues; i++) {
+ stats->q_ipackets[i] = internals->rxq_scg[i].rx_pkts;
+ stats->q_ibytes[i] = internals->rxq_scg[i].rx_bytes;
+ rx_total += stats->q_ipackets[i];
+ rx_total_b += stats->q_ibytes[i];
+ }
+
+ for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < internals->nb_tx_queues; i++) {
+ stats->q_opackets[i] = internals->txq_scg[i].tx_pkts;
+ stats->q_obytes[i] = internals->txq_scg[i].tx_bytes;
+ stats->q_errors[i] = internals->txq_scg[i].err_pkts;
+ tx_total += stats->q_opackets[i];
+ tx_total_b += stats->q_obytes[i];
+ tx_err_total += stats->q_errors[i];
+ }
+
+ stats->imissed = internals->rx_missed;
+ stats->ipackets = rx_total;
+ stats->ibytes = rx_total_b;
+ stats->opackets = tx_total;
+ stats->obytes = tx_total_b;
+ stats->oerrors = tx_err_total;
+
+ return 0;
+}
+
+static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s *p_nt_drv,
+ int n_intf_no)
+{
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ unsigned int i;
+
+	if (!p_nthw_stat || !p_nt4ga_stat || n_intf_no < 0 || n_intf_no >= NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /* Rx */
+ for (i = 0; i < internals->nb_rx_queues; i++) {
+ internals->rxq_scg[i].rx_pkts = 0;
+ internals->rxq_scg[i].rx_bytes = 0;
+ internals->rxq_scg[i].err_pkts = 0;
+ }
+
+ internals->rx_missed = 0;
+
+ /* Tx */
+ for (i = 0; i < internals->nb_tx_queues; i++) {
+ internals->txq_scg[i].tx_pkts = 0;
+ internals->txq_scg[i].tx_bytes = 0;
+ internals->txq_scg[i].err_pkts = 0;
+ }
+
+ p_nt4ga_stat->n_totals_reset_timestamp = time(NULL);
+
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
static int
eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
{
@@ -194,6 +292,23 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
return 0;
}
+static int eth_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ dpdk_stats_collect(internals, stats);
+ return 0;
+}
+
+static int eth_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ const int if_index = internals->n_intf_no;
+ dpdk_stats_reset(internals, p_nt_drv, if_index);
+ return 0;
+}
+
static int
eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info)
{
@@ -1453,6 +1568,8 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.dev_set_link_down = eth_dev_set_link_down,
.dev_close = eth_dev_close,
.link_update = eth_link_update,
+ .stats_get = eth_stats_get,
+ .stats_reset = eth_stats_reset,
.dev_infos_get = eth_dev_infos_get,
.fw_version_get = eth_fw_version_get,
.rx_queue_setup = eth_rx_scg_queue_setup,
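dpdk_stats_collect() above folds per-queue counters into the port-level totals of `struct rte_eth_stats`. The accumulation step reduces to a loop of this shape (names are illustrative, not driver API):

```c
#include <stddef.h>
#include <stdint.h>

struct q_stat {
	uint64_t pkts;
	uint64_t bytes;
};

/* Sum per-queue counters into port totals, as the Rx and Tx loops in
 * dpdk_stats_collect() do for q_ipackets/q_ibytes and friends. */
static void collect_totals(const struct q_stat *q, size_t nb_q,
			   uint64_t *pkts_total, uint64_t *bytes_total)
{
	size_t i;

	*pkts_total = 0;
	*bytes_total = 0;

	for (i = 0; i < nb_q; i++) {
		*pkts_total += q[i].pkts;
		*bytes_total += q[i].bytes;
	}
}
```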
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index a435b60fb2..ef69064f98 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -8,11 +8,19 @@
#include "create_elements.h"
#include "ntnic_mod_reg.h"
#include "ntos_system.h"
+#include "ntos_drv.h"
#define MAX_RTE_FLOWS 8192
+#define MAX_COLOR_FLOW_STATS 0x400
#define NT_MAX_COLOR_FLOW_STATS 0x400
+#if (MAX_COLOR_FLOW_STATS != NT_MAX_COLOR_FLOW_STATS)
+#error Difference in COLOR_FLOW_STATS. Please synchronize the defines.
+#endif
+
rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;
static struct rte_flow nt_flows[MAX_RTE_FLOWS];
@@ -681,6 +689,9 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
/* Cleanup recorded flows */
nt_flows[flow].used = 0;
nt_flows[flow].caller_id = 0;
+ nt_flows[flow].stat_bytes = 0UL;
+ nt_flows[flow].stat_pkts = 0UL;
+ nt_flows[flow].stat_tcp_flags = 0;
}
}
@@ -720,6 +731,127 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int poll_statistics(struct pmd_internals *internals)
+{
+ int flow;
+ struct drv_s *p_drv = internals->p_drv;
+ struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const int if_index = internals->n_intf_no;
+	static uint64_t last_stat_rtc;	/* shared across ports for the global poll gate */
+
+	if (!p_nt4ga_stat || if_index < 0 || if_index >= NUM_ADAPTER_PORTS_MAX)
+ return -1;
+
+ assert(rte_tsc_freq > 0);
+
+ rte_spinlock_lock(&hwlock);
+
+ uint64_t now_rtc = rte_get_tsc_cycles();
+
+	/*
+	 * Check per port at most once a second:
+	 * if more than a second has passed since the last stat read, do a new one
+	 */
+ if ((now_rtc - internals->last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ return 0;
+ }
+
+ internals->last_stat_rtc = now_rtc;
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+
+ /*
+ * Add the RX statistics increments since last time we polled.
+ * (No difference if physical or virtual port)
+ */
+ internals->rxq_scg[0].rx_pkts += p_nt4ga_stat->a_port_rx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_packets_base[if_index];
+ internals->rxq_scg[0].rx_bytes += p_nt4ga_stat->a_port_rx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_rx_octets_base[if_index];
+ internals->rxq_scg[0].err_pkts += 0;
+ internals->rx_missed += p_nt4ga_stat->a_port_rx_drops_total[if_index] -
+ p_nt4ga_stat->a_port_rx_drops_base[if_index];
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_rx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_packets_total[if_index];
+ p_nt4ga_stat->a_port_rx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_rx_octets_total[if_index];
+ p_nt4ga_stat->a_port_rx_drops_base[if_index] =
+ p_nt4ga_stat->a_port_rx_drops_total[if_index];
+
+ /* Tx (here we must distinguish between physical and virtual ports) */
+ if (internals->type == PORT_TYPE_PHYSICAL) {
+ /* Add the statistics increments since last time we polled */
+ internals->txq_scg[0].tx_pkts += p_nt4ga_stat->a_port_tx_packets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_packets_base[if_index];
+ internals->txq_scg[0].tx_bytes += p_nt4ga_stat->a_port_tx_octets_total[if_index] -
+ p_nt4ga_stat->a_port_tx_octets_base[if_index];
+ internals->txq_scg[0].err_pkts += 0;
+
+ /* Update the increment bases */
+ p_nt4ga_stat->a_port_tx_packets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_packets_total[if_index];
+ p_nt4ga_stat->a_port_tx_octets_base[if_index] =
+ p_nt4ga_stat->a_port_tx_octets_total[if_index];
+ }
+
+ /* Globally only once a second */
+ if ((now_rtc - last_stat_rtc) < rte_tsc_freq) {
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return 0;
+ }
+
+ last_stat_rtc = now_rtc;
+
+	/* All color counters are global, therefore only one PMD must update them */
+ const struct color_counters *p_color_counters = p_nt4ga_stat->mp_stat_structs_color;
+ struct color_counters *p_color_counters_base = p_nt4ga_stat->a_stat_structs_color_base;
+ uint64_t color_packets_accumulated, color_bytes_accumulated;
+
+ for (flow = 0; flow < MAX_RTE_FLOWS; flow++) {
+ if (nt_flows[flow].used) {
+ unsigned int color = nt_flows[flow].flow_stat_id;
+
+ if (color < NT_MAX_COLOR_FLOW_STATS) {
+ color_packets_accumulated = p_color_counters[color].color_packets;
+ nt_flows[flow].stat_pkts +=
+ (color_packets_accumulated -
+ p_color_counters_base[color].color_packets);
+
+ nt_flows[flow].stat_tcp_flags |= p_color_counters[color].tcp_flags;
+
+ color_bytes_accumulated = p_color_counters[color].color_bytes;
+ nt_flows[flow].stat_bytes +=
+ (color_bytes_accumulated -
+ p_color_counters_base[color].color_bytes);
+
+ /* Update the counter bases */
+ p_color_counters_base[color].color_packets =
+ color_packets_accumulated;
+ p_color_counters_base[color].color_bytes = color_bytes_accumulated;
+ }
+ }
+ }
+
+ rte_spinlock_unlock(&hwlock);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ return 0;
+}
+
+static const struct ntnic_filter_ops ntnic_filter_ops = {
+ .poll_statistics = poll_statistics,
+};
+
+void ntnic_filter_init(void)
+{
+ register_ntnic_filter_ops(&ntnic_filter_ops);
+}
+
static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
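poll_statistics() gates the expensive hardware read with a TSC comparison so each port polls at most once a second. The gate itself reduces to this pattern (illustrative helper; the driver uses `rte_get_tsc_cycles()` and `rte_tsc_freq` directly):

```c
#include <stdint.h>

/* Return 1 and advance *last when at least interval cycles have elapsed
 * since *last; otherwise return 0 and leave *last untouched. */
static int should_poll(uint64_t now, uint64_t *last, uint64_t interval)
{
	if (now - *last < interval)
		return 0;

	*last = now;
	return 1;
}
```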
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 593b56bf5b..355e2032b1 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,21 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+static const struct ntnic_filter_ops *ntnic_filter_ops;
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
+{
+ ntnic_filter_ops = ops;
+}
+
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void)
+{
+ if (ntnic_filter_ops == NULL)
+ ntnic_filter_init();
+
+ return ntnic_filter_ops;
+}
+
static struct link_ops_s *link_100g_ops;
void register_100g_link_ops(struct link_ops_s *ops)
@@ -47,6 +62,21 @@ const struct port_ops *get_port_ops(void)
return port_ops;
}
+static const struct nt4ga_stat_ops *nt4ga_stat_ops;
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops)
+{
+ nt4ga_stat_ops = ops;
+}
+
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void)
+{
+ if (nt4ga_stat_ops == NULL)
+ nt4ga_stat_ops_init();
+
+ return nt4ga_stat_ops;
+}
+
static const struct adapter_ops *adapter_ops;
void register_adapter_ops(const struct adapter_ops *ops)
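The register/get pairs in ntnic_mod_reg.c decouple providers from consumers: a module registers its ops table once, and `get_*_ops()` lazily triggers registration on first use (as `get_ntnic_filter_ops()` does via `ntnic_filter_init()`). A generic sketch of the pattern with hypothetical names:

```c
#include <stddef.h>

struct demo_ops {
	int (*op)(int x);
};

static const struct demo_ops *demo_ops_ptr;

static void register_demo_ops(const struct demo_ops *ops)
{
	demo_ops_ptr = ops;
}

static int demo_op_impl(int x)
{
	return x + 1;
}

static const struct demo_ops demo_ops_impl = { .op = demo_op_impl };

/* Lazy init on first lookup, mirroring get_ntnic_filter_ops(). */
static const struct demo_ops *get_demo_ops(void)
{
	if (demo_ops_ptr == NULL)
		register_demo_ops(&demo_ops_impl);

	return demo_ops_ptr;
}
```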
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index e40ed9b949..30b9afb7d3 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -111,6 +111,14 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+struct ntnic_filter_ops {
+ int (*poll_statistics)(struct pmd_internals *internals);
+};
+
+void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops);
+const struct ntnic_filter_ops *get_ntnic_filter_ops(void);
+void ntnic_filter_init(void);
+
struct link_ops_s {
int (*link_init)(struct adapter_info_s *p_adapter_info, nthw_fpga_t *p_fpga);
};
@@ -175,6 +183,15 @@ void register_port_ops(const struct port_ops *ops);
const struct port_ops *get_port_ops(void);
void port_init(void);
+struct nt4ga_stat_ops {
+ int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+};
+
+void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
+const struct nt4ga_stat_ops *get_nt4ga_stat_ops(void);
+void nt4ga_stat_ops_init(void);
+
struct adapter_ops {
int (*init)(struct adapter_info_s *p_adapter_info);
int (*deinit)(struct adapter_info_s *p_adapter_info);
diff --git a/drivers/net/ntnic/ntutil/nt_util.h b/drivers/net/ntnic/ntutil/nt_util.h
index a482fb43ad..f2eccf3501 100644
--- a/drivers/net/ntnic/ntutil/nt_util.h
+++ b/drivers/net/ntnic/ntutil/nt_util.h
@@ -22,6 +22,7 @@
* The window size must be at most 3 minutes in order to
* prevent overflow.
*/
+#define PORT_LOAD_WINDOWS_SIZE 2ULL
#define FLM_LOAD_WINDOWS_SIZE 2ULL
#define PCIIDENT_TO_DOMAIN(pci_ident) ((uint16_t)(((unsigned int)(pci_ident) >> 16) & 0xFFFFU))
--
2.45.0
* [PATCH v5 55/80] net/ntnic: add rpf module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (53 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 54/80] net/ntnic: add statistics support Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 56/80] net/ntnic: add statistics poll Serhii Iliushyk
` (24 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The Receive Port FIFO (RPF) module controls the small FPGA FIFO
in which packets are stored before they enter the packet processor pipeline.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 25 +++-
drivers/net/ntnic/include/ntnic_stat.h | 2 +
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_rpf.h | 48 +++++++
drivers/net/ntnic/nthw/core/nthw_rpf.c | 119 ++++++++++++++++++
.../net/ntnic/nthw/model/nthw_fpga_model.c | 12 ++
.../net/ntnic/nthw/model/nthw_fpga_model.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_rpf.h | 19 +++
10 files changed, 228 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_rpf.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_rpf.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 0e20f3ea45..f733fd5459 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -11,6 +11,7 @@
#include "nt4ga_adapter.h"
#include "ntnic_nim.h"
#include "flow_filter.h"
+#include "ntnic_stat.h"
#include "ntnic_mod_reg.h"
#define DEFAULT_MAX_BPS_SPEED 100e9
@@ -43,7 +44,7 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
if (!p_nthw_rmc) {
nthw_stat_delete(p_nthw_stat);
- NT_LOG(ERR, NTNIC, "%s: ERROR ", p_adapter_id_str);
+ NT_LOG(ERR, NTNIC, "%s: ERROR rmc allocation", p_adapter_id_str);
return -1;
}
@@ -54,6 +55,22 @@ static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
p_nt4ga_stat->mp_nthw_rmc = NULL;
}
+ if (nthw_rpf_init(NULL, p_fpga, p_adapter_info->adapter_no) == 0) {
+ nthw_rpf_t *p_nthw_rpf = nthw_rpf_new();
+
+ if (!p_nthw_rpf) {
+ nthw_stat_delete(p_nthw_stat);
+ NT_LOG_DBGX(ERR, NTNIC, "%s: ERROR", p_adapter_id_str);
+ return -1;
+ }
+
+ nthw_rpf_init(p_nthw_rpf, p_fpga, p_adapter_info->adapter_no);
+ p_nt4ga_stat->mp_nthw_rpf = p_nthw_rpf;
+
+ } else {
+ p_nt4ga_stat->mp_nthw_rpf = NULL;
+ }
+
p_nt4ga_stat->mp_nthw_stat = p_nthw_stat;
nthw_stat_init(p_nthw_stat, p_fpga, 0);
@@ -77,6 +94,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_block(p_nt4ga_stat->mp_nthw_rmc);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_block(p_nt4ga_stat->mp_nthw_rpf);
+
/* Allocate and map memory for fpga statistics */
{
uint32_t n_stat_size = (uint32_t)(p_nthw_stat->m_nb_counters * sizeof(uint32_t) +
@@ -112,6 +132,9 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
if (p_nt4ga_stat->mp_nthw_rmc)
nthw_rmc_unblock(p_nt4ga_stat->mp_nthw_rmc, false);
+ if (p_nt4ga_stat->mp_nthw_rpf)
+ nthw_rpf_unblock(p_nt4ga_stat->mp_nthw_rpf);
+
p_nt4ga_stat->mp_stat_structs_color =
calloc(p_nthw_stat->m_nb_color_counters, sizeof(struct color_counters));
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 2aee3f8425..ed24a892ec 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -8,6 +8,7 @@
#include "common_adapter_defs.h"
#include "nthw_rmc.h"
+#include "nthw_rpf.h"
#include "nthw_fpga_model.h"
#define NT_MAX_COLOR_FLOW_STATS 0x400
@@ -102,6 +103,7 @@ struct flm_counters_v1 {
struct nt4ga_stat_s {
nthw_stat_t *mp_nthw_stat;
nthw_rmc_t *mp_nthw_rmc;
+ nthw_rpf_t *mp_nthw_rpf;
struct nt_dma_s *p_stat_dma;
uint32_t *p_stat_dma_virtual;
uint32_t n_stat_size;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 216341bb11..ed5a201fd5 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -47,6 +47,7 @@ sources = files(
'nthw/core/nthw_iic.c',
'nthw/core/nthw_mac_pcs.c',
'nthw/core/nthw_pcie3.c',
+ 'nthw/core/nthw_rpf.c',
'nthw/core/nthw_rmc.c',
'nthw/core/nthw_sdc.c',
'nthw/core/nthw_si5340.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
new file mode 100644
index 0000000000..4c6c57ba55
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
@@ -0,0 +1,48 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTHW_RPF_HPP_
+#define NTHW_RPF_HPP_
+
+#include "nthw_fpga_model.h"
+#include "pthread.h"
+struct nthw_rpf {
+ nthw_fpga_t *mp_fpga;
+
+ nthw_module_t *m_mod_rpf;
+
+ int mn_instance;
+
+ nthw_register_t *mp_reg_control;
+ nthw_field_t *mp_fld_control_pen;
+ nthw_field_t *mp_fld_control_rpp_en;
+ nthw_field_t *mp_fld_control_st_tgl_en;
+ nthw_field_t *mp_fld_control_keep_alive_en;
+
+ nthw_register_t *mp_ts_sort_prg;
+ nthw_field_t *mp_fld_ts_sort_prg_maturing_delay;
+ nthw_field_t *mp_fld_ts_sort_prg_ts_at_eof;
+
+ int m_default_maturing_delay;
+ bool m_administrative_block; /* used to enforce license expiry */
+
+ pthread_mutex_t rpf_mutex;
+};
+
+typedef struct nthw_rpf nthw_rpf_t;
+typedef struct nthw_rpf nt_rpf;
+
+nthw_rpf_t *nthw_rpf_new(void);
+void nthw_rpf_delete(nthw_rpf_t *p);
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance);
+void nthw_rpf_administrative_block(nthw_rpf_t *p);
+void nthw_rpf_block(nthw_rpf_t *p);
+void nthw_rpf_unblock(nthw_rpf_t *p);
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay);
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p);
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable);
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p);
+
+#endif
diff --git a/drivers/net/ntnic/nthw/core/nthw_rpf.c b/drivers/net/ntnic/nthw/core/nthw_rpf.c
new file mode 100644
index 0000000000..81c704d01a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_rpf.c
@@ -0,0 +1,119 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+#include "nthw_rpf.h"
+
+nthw_rpf_t *nthw_rpf_new(void)
+{
+ nthw_rpf_t *p = malloc(sizeof(nthw_rpf_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_rpf_t));
+
+ return p;
+}
+
+void nthw_rpf_delete(nthw_rpf_t *p)
+{
+ if (p) {
+ memset(p, 0, sizeof(nthw_rpf_t));
+ free(p);
+ }
+}
+
+int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ nthw_module_t *p_mod = nthw_fpga_query_module(p_fpga, MOD_RPF, n_instance);
+
+ if (p == NULL)
+ return p_mod == NULL ? -1 : 0;
+
+ if (p_mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: MOD_RPF %d: no such instance",
+ p->mp_fpga->p_fpga_info->mp_adapter_id_str, p->mn_instance);
+ return -1;
+ }
+
+ p->m_mod_rpf = p_mod;
+
+ p->mp_fpga = p_fpga;
+
+ p->m_administrative_block = false;
+
+ /* CONTROL */
+ p->mp_reg_control = nthw_module_get_register(p->m_mod_rpf, RPF_CONTROL);
+ p->mp_fld_control_pen = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_PEN);
+ p->mp_fld_control_rpp_en = nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_RPP_EN);
+ p->mp_fld_control_st_tgl_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_ST_TGL_EN);
+ p->mp_fld_control_keep_alive_en =
+ nthw_register_get_field(p->mp_reg_control, RPF_CONTROL_KEEP_ALIVE_EN);
+
+ /* TS_SORT_PRG */
+ p->mp_ts_sort_prg = nthw_module_get_register(p->m_mod_rpf, RPF_TS_SORT_PRG);
+ p->mp_fld_ts_sort_prg_maturing_delay =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_MATURING_DELAY);
+ p->mp_fld_ts_sort_prg_ts_at_eof =
+ nthw_register_get_field(p->mp_ts_sort_prg, RPF_TS_SORT_PRG_TS_AT_EOF);
+ p->m_default_maturing_delay =
+ nthw_fpga_get_product_param(p_fpga, NT_RPF_MATURING_DEL_DEFAULT, 0);
+
+ /* Initialize mutex */
+ pthread_mutex_init(&p->rpf_mutex, NULL);
+ return 0;
+}
+
+void nthw_rpf_administrative_block(nthw_rpf_t *p)
+{
+ /* block all MAC ports */
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+
+ p->m_administrative_block = true;
+}
+
+void nthw_rpf_block(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+ nthw_field_set_val_flush32(p->mp_fld_control_pen, 0);
+}
+
+void nthw_rpf_unblock(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_reg_control);
+
+ nthw_field_set_val32(p->mp_fld_control_pen, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_rpp_en, ~0U);
+ nthw_field_set_val32(p->mp_fld_control_st_tgl_en, 1);
+ nthw_field_set_val_flush32(p->mp_fld_control_keep_alive_en, 1);
+}
+
+void nthw_rpf_set_maturing_delay(nthw_rpf_t *p, int32_t delay)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_maturing_delay, (uint32_t)delay);
+}
+
+int32_t nthw_rpf_get_maturing_delay(nthw_rpf_t *p)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ /* Maturing delay is a two's complement 18 bit value, so we retrieve it as signed */
+ return nthw_field_get_signed(p->mp_fld_ts_sort_prg_maturing_delay);
+}
+
+void nthw_rpf_set_ts_at_eof(nthw_rpf_t *p, bool enable)
+{
+ nthw_register_update(p->mp_ts_sort_prg);
+ nthw_field_set_val_flush32(p->mp_fld_ts_sort_prg_ts_at_eof, enable);
+}
+
+bool nthw_rpf_get_ts_at_eof(nthw_rpf_t *p)
+{
+ return nthw_field_get_updated(p->mp_fld_ts_sort_prg_ts_at_eof);
+}
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
index 4d495f5b96..9eaaeb550d 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.c
@@ -1050,6 +1050,18 @@ uint32_t nthw_field_get_val32(const nthw_field_t *p)
return val;
}
+int32_t nthw_field_get_signed(const nthw_field_t *p)
+{
+ uint32_t val;
+
+ nthw_field_get_val(p, &val, 1);
+
+ if (val & (1U << nthw_field_get_bit_pos_high(p))) /* check sign */
+ val = val | ~nthw_field_get_mask(p); /* sign extension */
+
+ return (int32_t)val; /* cast to signed value */
+}
+
uint32_t nthw_field_get_updated(const nthw_field_t *p)
{
uint32_t val;
diff --git a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
index 7956f0689e..d4e7ab3edd 100644
--- a/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
+++ b/drivers/net/ntnic/nthw/model/nthw_fpga_model.h
@@ -227,6 +227,7 @@ void nthw_field_get_val(const nthw_field_t *p, uint32_t *p_data, uint32_t len);
void nthw_field_set_val(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
void nthw_field_set_val_flush(const nthw_field_t *p, const uint32_t *p_data, uint32_t len);
uint32_t nthw_field_get_val32(const nthw_field_t *p);
+int32_t nthw_field_get_signed(const nthw_field_t *p);
uint32_t nthw_field_get_updated(const nthw_field_t *p);
void nthw_field_update_register(const nthw_field_t *p);
void nthw_field_flush_register(const nthw_field_t *p);
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index ddc144dc02..03122acaf5 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -41,6 +41,7 @@
#define MOD_RAC (0xae830b42UL)
#define MOD_RMC (0x236444eUL)
#define MOD_RPL (0x6de535c3UL)
+#define MOD_RPF (0x8d30dcddUL)
#define MOD_RPP_LR (0xba7f945cUL)
#define MOD_RST9563 (0x385d6d1dUL)
#define MOD_SDC (0xd2369530UL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 8f196f885f..7067f4b1d0 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -39,6 +39,7 @@
#include "nthw_fpga_reg_defs_qsl.h"
#include "nthw_fpga_reg_defs_rac.h"
#include "nthw_fpga_reg_defs_rmc.h"
+#include "nthw_fpga_reg_defs_rpf.h"
#include "nthw_fpga_reg_defs_rpl.h"
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
new file mode 100644
index 0000000000..72f450b85d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_rpf.h
@@ -0,0 +1,19 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_RPF_
+#define _NTHW_FPGA_REG_DEFS_RPF_
+
+/* RPF */
+#define RPF_CONTROL (0x7a5bdb50UL)
+#define RPF_CONTROL_KEEP_ALIVE_EN (0x80be3ffcUL)
+#define RPF_CONTROL_PEN (0xb23137b8UL)
+#define RPF_CONTROL_RPP_EN (0xdb51f109UL)
+#define RPF_CONTROL_ST_TGL_EN (0x45a6ecfaUL)
+#define RPF_TS_SORT_PRG (0xff1d137eUL)
+#define RPF_TS_SORT_PRG_MATURING_DELAY (0x2a38e127UL)
+#define RPF_TS_SORT_PRG_TS_AT_EOF (0x9f27d433UL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_RPF_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 56/80] net/ntnic: add statistics poll
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (54 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 55/80] net/ntnic: add rpf module Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 57/80] net/ntnic: added flm stat interface Serhii Iliushyk
` (23 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add a mechanism that polls the statistics module and updates the
counter values read via the DMA module.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 343 ++++++++++++++++++
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/include/ntnic_stat.h | 78 ++++
.../net/ntnic/nthw/core/include/nthw_rmc.h | 5 +
drivers/net/ntnic/nthw/core/nthw_rmc.c | 20 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 1 +
drivers/net/ntnic/nthw/stat/nthw_stat.c | 128 +++++++
drivers/net/ntnic/ntnic_ethdev.c | 143 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 +
11 files changed, 723 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 4ed732d9f8..6e3a290a5c 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -63,6 +63,7 @@ Features
source only, destination only or both.
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
+- Port and queue statistics.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 2cace179b3..1b7e4ab3ae 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -163,6 +163,7 @@ New Features
* Added basic handling of the virtual queues.
* Added flow handling support
* Enable virtual queues
+ * Added statistics support
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index f733fd5459..3afc5b7853 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -16,6 +16,27 @@
#define DEFAULT_MAX_BPS_SPEED 100e9
+/* Inline timestamp format is pcap 32:32 bits. Convert to nsecs */
+static inline uint64_t timestamp2ns(uint64_t ts)
+{
+ return ((ts) >> 32) * 1000000000 + ((ts) & 0xffffffff);
+}
+
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual);
+
+static int nt4ga_stat_collect(struct adapter_info_s *p_adapter_info, nt4ga_stat_t *p_nt4ga_stat)
+{
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ p_nt4ga_stat->last_timestamp = timestamp2ns(*p_nthw_stat->mp_timestamp);
+ nt4ga_stat_collect_cap_v1_stats(p_adapter_info, p_nt4ga_stat,
+ p_nt4ga_stat->p_stat_dma_virtual);
+
+ return 0;
+}
+
static int nt4ga_stat_init(struct adapter_info_s *p_adapter_info)
{
const char *const p_adapter_id_str = p_adapter_info->mp_adapter_id_str;
@@ -203,9 +224,331 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return 0;
}
+/* Called with stat mutex locked */
+static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat,
+ uint32_t *p_stat_dma_virtual)
+{
+ (void)p_adapter_info;
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL)
+ return -1;
+
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+
+ const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
+ const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
+ int c, h, p;
+
+ if (!p_nthw_stat || !p_nt4ga_stat)
+ return -1;
+
+ if (p_nthw_stat->mn_stat_layout_version < 6) {
+ NT_LOG(ERR, NTNIC, "HW STA module version not supported");
+ return -1;
+ }
+
+ /* RX ports */
+ for (c = 0; c < p_nthw_stat->m_nb_color_counters / 2; c++) {
+ p_nt4ga_stat->mp_stat_structs_color[c].color_packets += p_stat_dma_virtual[c * 2];
+ p_nt4ga_stat->mp_stat_structs_color[c].color_bytes +=
+ p_stat_dma_virtual[c * 2 + 1];
+ }
+
+ /* Move to Host buffer counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_color_counters;
+
+ for (h = 0; h < p_nthw_stat->m_nb_rx_host_buffers; h++) {
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_packets += p_stat_dma_virtual[h * 8];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_packets += p_stat_dma_virtual[h * 8 + 1];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_packets += p_stat_dma_virtual[h * 8 + 2];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_packets +=
+ p_stat_dma_virtual[h * 8 + 3];
+ p_nt4ga_stat->mp_stat_structs_hb[h].flush_bytes += p_stat_dma_virtual[h * 8 + 4];
+ p_nt4ga_stat->mp_stat_structs_hb[h].drop_bytes += p_stat_dma_virtual[h * 8 + 5];
+ p_nt4ga_stat->mp_stat_structs_hb[h].fwd_bytes += p_stat_dma_virtual[h * 8 + 6];
+ p_nt4ga_stat->mp_stat_structs_hb[h].dbs_drop_bytes +=
+ p_stat_dma_virtual[h * 8 + 7];
+ }
+
+ /* Move to Rx Port counters */
+ p_stat_dma_virtual += p_nthw_stat->m_nb_rx_hb_counters;
+
+ /* RX ports */
+ for (p = 0; p < n_rx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 23];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].duplicate +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 24];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_ip_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 25];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_udp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 26];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_tcp_chksum_error +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 27];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_giant_undersize +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 28];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_baby_giant +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 29];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_not_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 30];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 31];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 32];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 33];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 34];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 35];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 36];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_isl_vlan_mpls +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 37];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_no_filter +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 43];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dedup_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 44];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_filter_drop +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 45];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_overflow +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 46];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].octets_dbs_drop +=
+ p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 47]
+ : 0;
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 48];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_first_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 49];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 50];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_mid_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 51];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 52];
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].ipft_last_not_hit +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 53];
+
+ /* Rx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 22] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 38] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 39] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 40] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 41] +
+ (p_nthw_stat->m_dbs_present
+ ? p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 42]
+ : 0);
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_rx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_rx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_rx_port_counters + 0];
+ p_nt4ga_stat->a_port_rx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_rx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Move to Tx Port counters */
+ p_stat_dma_virtual += n_rx_ports * p_nthw_stat->m_nb_rx_port_counters;
+
+ for (p = 0; p < n_tx_ports; p++) {
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].broadcast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 1];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].multicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 2];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].unicast_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 3];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_alignment +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 4];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_code_violation +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 5];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_crc +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 6];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].undersize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].oversize_pkts +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].fragments +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_not_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].jabbers_truncated +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_64_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_65_to_127_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_128_to_255_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_256_to_511_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_512_to_1023_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1024_to_1518_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_1519_to_2047_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_2048_to_4095_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_4096_to_8191_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_8192_to_max_octets +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].mac_drop_events +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts_lr +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 23];
+
+ /* Tx totals */
+ uint64_t new_drop_events_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 22];
+
+ uint64_t new_packets_sum =
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 7] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 8] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 9] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 10] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 11] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 12] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 13] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 14] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 15] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 16] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 17] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 18] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 19] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 20] +
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 21];
+
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].drop_events += new_drop_events_sum;
+ p_nt4ga_stat->cap.mp_stat_structs_port_tx[p].pkts += new_packets_sum;
+
+ p_nt4ga_stat->a_port_tx_octets_total[p] +=
+ p_stat_dma_virtual[p * p_nthw_stat->m_nb_tx_port_counters + 0];
+ p_nt4ga_stat->a_port_tx_packets_total[p] += new_packets_sum;
+ p_nt4ga_stat->a_port_tx_drops_total[p] += new_drop_events_sum;
+ }
+
+ /* Update and get port load counters */
+ for (p = 0; p < n_rx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_rx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].rx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ for (p = 0; p < n_tx_ports; p++) {
+ uint32_t val;
+ nthw_stat_get_load_bps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_bps =
+ (uint64_t)(((__uint128_t)val * 32ULL * 64ULL * 8ULL) /
+ PORT_LOAD_WINDOWS_SIZE);
+ nthw_stat_get_load_pps_tx(p_nthw_stat, p, &val);
+ p_nt4ga_stat->mp_port_load[p].tx_pps =
+ (uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
+ }
+
+ return 0;
+}
+
static struct nt4ga_stat_ops ops = {
.nt4ga_stat_init = nt4ga_stat_init,
.nt4ga_stat_setup = nt4ga_stat_setup,
+ .nt4ga_stat_collect = nt4ga_stat_collect
};
void nt4ga_stat_ops_init(void)
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 1135e9a539..38e4d0ca35 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -16,6 +16,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
+ rte_thread_t stat_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index ed24a892ec..0735dbc085 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -85,16 +85,87 @@ struct color_counters {
};
struct host_buffer_counters {
+ uint64_t flush_packets;
+ uint64_t drop_packets;
+ uint64_t fwd_packets;
+ uint64_t dbs_drop_packets;
+ uint64_t flush_bytes;
+ uint64_t drop_bytes;
+ uint64_t fwd_bytes;
+ uint64_t dbs_drop_bytes;
};
struct port_load_counters {
+ uint64_t rx_pps;
uint64_t rx_pps_max;
+ uint64_t tx_pps;
uint64_t tx_pps_max;
+ uint64_t rx_bps;
uint64_t rx_bps_max;
+ uint64_t tx_bps;
uint64_t tx_bps_max;
};
struct port_counters_v2 {
+ /* Rx/Tx common port counters */
+ uint64_t drop_events;
+ uint64_t pkts;
+ /* FPGA counters */
+ uint64_t octets;
+ uint64_t broadcast_pkts;
+ uint64_t multicast_pkts;
+ uint64_t unicast_pkts;
+ uint64_t pkts_alignment;
+ uint64_t pkts_code_violation;
+ uint64_t pkts_crc;
+ uint64_t undersize_pkts;
+ uint64_t oversize_pkts;
+ uint64_t fragments;
+ uint64_t jabbers_not_truncated;
+ uint64_t jabbers_truncated;
+ uint64_t pkts_64_octets;
+ uint64_t pkts_65_to_127_octets;
+ uint64_t pkts_128_to_255_octets;
+ uint64_t pkts_256_to_511_octets;
+ uint64_t pkts_512_to_1023_octets;
+ uint64_t pkts_1024_to_1518_octets;
+ uint64_t pkts_1519_to_2047_octets;
+ uint64_t pkts_2048_to_4095_octets;
+ uint64_t pkts_4096_to_8191_octets;
+ uint64_t pkts_8192_to_max_octets;
+ uint64_t mac_drop_events;
+ uint64_t pkts_lr;
+ /* Rx only port counters */
+ uint64_t duplicate;
+ uint64_t pkts_ip_chksum_error;
+ uint64_t pkts_udp_chksum_error;
+ uint64_t pkts_tcp_chksum_error;
+ uint64_t pkts_giant_undersize;
+ uint64_t pkts_baby_giant;
+ uint64_t pkts_not_isl_vlan_mpls;
+ uint64_t pkts_isl;
+ uint64_t pkts_vlan;
+ uint64_t pkts_isl_vlan;
+ uint64_t pkts_mpls;
+ uint64_t pkts_isl_mpls;
+ uint64_t pkts_vlan_mpls;
+ uint64_t pkts_isl_vlan_mpls;
+ uint64_t pkts_no_filter;
+ uint64_t pkts_dedup_drop;
+ uint64_t pkts_filter_drop;
+ uint64_t pkts_overflow;
+ uint64_t pkts_dbs_drop;
+ uint64_t octets_no_filter;
+ uint64_t octets_dedup_drop;
+ uint64_t octets_filter_drop;
+ uint64_t octets_overflow;
+ uint64_t octets_dbs_drop;
+ uint64_t ipft_first_hit;
+ uint64_t ipft_first_not_hit;
+ uint64_t ipft_mid_hit;
+ uint64_t ipft_mid_not_hit;
+ uint64_t ipft_last_hit;
+ uint64_t ipft_last_not_hit;
};
struct flm_counters_v1 {
@@ -147,6 +218,8 @@ struct nt4ga_stat_s {
uint64_t a_port_tx_packets_base[NUM_ADAPTER_PORTS_MAX];
uint64_t a_port_tx_packets_total[NUM_ADAPTER_PORTS_MAX];
+
+ uint64_t a_port_tx_drops_total[NUM_ADAPTER_PORTS_MAX];
};
typedef struct nt4ga_stat_s nt4ga_stat_t;
@@ -159,4 +232,9 @@ int nthw_stat_set_dma_address(nthw_stat_t *p, uint64_t stat_dma_physical,
uint32_t *p_stat_dma_virtual);
int nthw_stat_trigger(nthw_stat_t *p);
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val);
+
#endif /* NTNIC_STAT_H_ */
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
index b239752674..9c40804cd9 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rmc.h
@@ -47,4 +47,9 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance);
void nthw_rmc_block(nthw_rmc_t *p);
void nthw_rmc_unblock(nthw_rmc_t *p, bool b_is_secondary);
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p);
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p);
+
#endif /* NTHW_RMC_H_ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_rmc.c b/drivers/net/ntnic/nthw/core/nthw_rmc.c
index 748519aeb4..570a179fc8 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rmc.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rmc.c
@@ -77,6 +77,26 @@ int nthw_rmc_init(nthw_rmc_t *p, nthw_fpga_t *p_fpga, int n_instance)
return 0;
}
+uint32_t nthw_rmc_get_status_sf_ram_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_sf_ram_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_status_descr_fifo_of(nthw_rmc_t *p)
+{
+ return (p->mp_reg_status) ? nthw_field_get_updated(p->mp_fld_descr_fifo_of) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_dbg_merge(nthw_rmc_t *p)
+{
+ return (p->mp_reg_dbg) ? nthw_field_get_updated(p->mp_fld_dbg_merge) : 0xffffffff;
+}
+
+uint32_t nthw_rmc_get_mac_if_err(nthw_rmc_t *p)
+{
+ return (p->mp_reg_mac_if) ? nthw_field_get_updated(p->mp_fld_mac_if_err) : 0xffffffff;
+}
+
void nthw_rmc_block(nthw_rmc_t *p)
{
/* BLOCK_STATT(0)=1 BLOCK_KEEPA(1)=1 BLOCK_MAC_PORT(8:11)=~0 */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 4847b2de99..84ab811369 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_nic_setup.h"
+#include "ntlog.h"
#include "ntnic_mod_reg.h"
#include "flow_api.h"
diff --git a/drivers/net/ntnic/nthw/stat/nthw_stat.c b/drivers/net/ntnic/nthw/stat/nthw_stat.c
index 6adcd2e090..078eec5e1f 100644
--- a/drivers/net/ntnic/nthw/stat/nthw_stat.c
+++ b/drivers/net/ntnic/nthw/stat/nthw_stat.c
@@ -368,3 +368,131 @@ int nthw_stat_trigger(nthw_stat_t *p)
return 0;
}
+
+int nthw_stat_get_load_bps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_bps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_bps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_bps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_bps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_rx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_rx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_rx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_rx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
+
+int nthw_stat_get_load_pps_tx(nthw_stat_t *p, uint8_t port, uint32_t *val)
+{
+ switch (port) {
+ case 0:
+ if (p->mp_fld_load_pps_tx0) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx0);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ case 1:
+ if (p->mp_fld_load_pps_tx1) {
+ *val = nthw_field_get_updated(p->mp_fld_load_pps_tx1);
+ return 0;
+
+ } else {
+ *val = 0;
+ return -1;
+ }
+
+ break;
+
+ default:
+ return -1;
+ }
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 3d02e79691..8a9ca2c03d 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -4,6 +4,9 @@
*/
#include <stdint.h>
+#include <stdarg.h>
+
+#include <signal.h>
#include <rte_eal.h>
#include <rte_dev.h>
@@ -25,6 +28,7 @@
#include "nt_util.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
+#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
#define THREAD_CTRL_CREATE(a, b, c, d) rte_thread_create_internal_control(a, b, c, d)
#define THREAD_JOIN(a) rte_thread_join(a, NULL)
#define THREAD_FUNC static uint32_t
@@ -67,6 +71,9 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
uint64_t rte_tsc_freq;
+static void (*previous_handler)(int sig);
+static rte_thread_t shutdown_tid;
+
int kill_pmd;
#define ETH_DEV_NTNIC_HELP_ARG "help"
@@ -1407,6 +1414,7 @@ drv_deinit(struct drv_s *p_drv)
/* stop statistics threads */
p_drv->ntdrv.b_shutdown = true;
+ THREAD_JOIN(p_nt_drv->stat_thread);
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
@@ -1626,6 +1634,87 @@ THREAD_FUNC adapter_flm_update_thread_fn(void *context)
return THREAD_RETURN;
}
+/*
+ * Adapter stat thread
+ */
+THREAD_FUNC adapter_stat_thread_fn(void *context)
+{
+ const struct nt4ga_stat_ops *nt4ga_stat_ops = get_nt4ga_stat_ops();
+
+ if (nt4ga_stat_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "Statistics module uninitialized");
+ return THREAD_RETURN;
+ }
+
+ struct drv_s *p_drv = context;
+
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ const char *const p_adapter_id_str = p_nt_drv->adapter_info.mp_adapter_id_str;
+ (void)p_adapter_id_str;
+
+ if (!p_nthw_stat)
+ return THREAD_RETURN;
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: begin", p_adapter_id_str);
+
+ assert(p_nthw_stat);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ nt_os_wait_usec(10 * 1000);
+
+ nthw_stat_trigger(p_nthw_stat);
+
+ uint32_t loop = 0;
+
+ while ((!p_drv->ntdrv.b_shutdown) &&
+ (*p_nthw_stat->mp_timestamp == (uint64_t)-1)) {
+ nt_os_wait_usec(1 * 100);
+
+ if (rte_log_get_level(nt_log_ntnic) == RTE_LOG_DEBUG &&
+ (++loop & 0x3fff) == 0) {
+ if (p_nt4ga_stat->mp_nthw_rpf) {
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+
+ } else if (p_nt4ga_stat->mp_nthw_rmc) {
+ uint32_t sf_ram_of =
+ nthw_rmc_get_status_sf_ram_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+ uint32_t descr_fifo_of =
+ nthw_rmc_get_status_descr_fifo_of(p_nt4ga_stat
+ ->mp_nthw_rmc);
+
+ uint32_t dbg_merge =
+ nthw_rmc_get_dbg_merge(p_nt4ga_stat->mp_nthw_rmc);
+ uint32_t mac_if_err =
+ nthw_rmc_get_mac_if_err(p_nt4ga_stat->mp_nthw_rmc);
+
+ NT_LOG(ERR, NTNIC, "Statistics DMA frozen");
+ NT_LOG(ERR, NTNIC, "SF RAM Overflow : %08x",
+ sf_ram_of);
+ NT_LOG(ERR, NTNIC, "Descr Fifo Overflow : %08x",
+ descr_fifo_of);
+ NT_LOG(ERR, NTNIC, "DBG Merge : %08x",
+ dbg_merge);
+ NT_LOG(ERR, NTNIC, "MAC If Errors : %08x",
+ mac_if_err);
+ }
+ }
+ }
+
+ /* Check then collect */
+ {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nt4ga_stat_ops->nt4ga_stat_collect(&p_nt_drv->adapter_info, p_nt4ga_stat);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ }
+ }
+
+ NT_LOG_DBGX(DBG, NTNIC, "%s: end", p_adapter_id_str);
+ return THREAD_RETURN;
+}
+
static int
nthw_pci_dev_init(struct rte_pci_device *pci_dev)
{
@@ -1883,6 +1972,16 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
}
}
+ pthread_mutex_init(&p_nt_drv->stat_lck, NULL);
+ res = THREAD_CTRL_CREATE(&p_nt_drv->stat_thread, "nt4ga_stat_thr", adapter_stat_thread_fn,
+ (void *)p_drv);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+
n_phy_ports = fpga_info->n_phy_ports;
for (int n_intf_no = 0; n_intf_no < n_phy_ports; n_intf_no++) {
@@ -2073,6 +2172,48 @@ nthw_pci_dev_deinit(struct rte_eth_dev *eth_dev __rte_unused)
return 0;
}
+static void signal_handler_func_int(int sig)
+{
+ if (sig != SIGINT) {
+ signal(sig, previous_handler);
+ raise(sig);
+ return;
+ }
+
+ kill_pmd = 1;
+}
+
+THREAD_FUNC shutdown_thread(void *arg __rte_unused)
+{
+ while (!kill_pmd)
+ nt_os_wait_usec(100 * 1000);
+
+ NT_LOG_DBGX(DBG, NTNIC, "Shutting down because of ctrl+C");
+
+ signal(SIGINT, previous_handler);
+ raise(SIGINT);
+
+ return THREAD_RETURN;
+}
+
+static int init_shutdown(void)
+{
+ NT_LOG(DBG, NTNIC, "Starting shutdown handler");
+ kill_pmd = 0;
+ previous_handler = signal(SIGINT, signal_handler_func_int);
+ THREAD_CREATE(&shutdown_tid, shutdown_thread, NULL);
+
+ /*
+ * 1 time calculation of 1 sec stat update rtc cycles to prevent stat poll
+ * flooding by OVS from multiple virtual port threads - no need to be precise
+ */
+ uint64_t now_rtc = rte_get_tsc_cycles();
+ nt_os_wait_usec(10 * 1000);
+ rte_tsc_freq = 100 * (rte_get_tsc_cycles() - now_rtc);
+
+ return 0;
+}
+
static int
nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
@@ -2115,6 +2256,8 @@ nthw_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
ret = nthw_pci_dev_init(pci_dev);
+ init_shutdown();
+
NT_LOG_DBGX(DBG, NTNIC, "leave: ret=%d", ret);
return ret;
}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 30b9afb7d3..8b825d8c48 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -186,6 +186,8 @@ void port_init(void);
struct nt4ga_stat_ops {
int (*nt4ga_stat_init)(struct adapter_info_s *p_adapter_info);
int (*nt4ga_stat_setup)(struct adapter_info_s *p_adapter_info);
+ int (*nt4ga_stat_collect)(struct adapter_info_s *p_adapter_info,
+ nt4ga_stat_t *p_nt4ga_stat);
};
void register_nt4ga_stat_ops(const struct nt4ga_stat_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 57/80] net/ntnic: added flm stat interface
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (55 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 56/80] net/ntnic: add statistics poll Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 58/80] net/ntnic: add TSM module Serhii Iliushyk
` (22 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The FLM stat module interface was added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 2 ++
drivers/net/ntnic/include/flow_filter.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 2 ++
4 files changed, 16 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 4a1525f237..ed96f77bc0 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -233,4 +233,6 @@ int flow_nic_set_hasher(struct flow_nic_dev *ndev, int hsh_idx, enum flow_nic_ha
int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif
diff --git a/drivers/net/ntnic/include/flow_filter.h b/drivers/net/ntnic/include/flow_filter.h
index d204c0d882..01777f8c9f 100644
--- a/drivers/net/ntnic/include/flow_filter.h
+++ b/drivers/net/ntnic/include/flow_filter.h
@@ -11,5 +11,6 @@
int flow_filter_init(nthw_fpga_t *p_fpga, struct flow_nic_dev **p_flow_device, int adapter_no);
int flow_filter_done(struct flow_nic_dev *dev);
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
#endif /* __FLOW_FILTER_HPP__ */
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 84ab811369..d5a4b0b10c 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1014,6 +1014,16 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ (void)ndev;
+ (void)data;
+ (void)size;
+
+ NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
+ return -1;
+}
+
static const struct flow_filter_ops ops = {
.flow_filter_init = flow_filter_init,
.flow_filter_done = flow_filter_done,
@@ -1028,6 +1038,7 @@ static const struct flow_filter_ops ops = {
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
+ .flow_get_flm_stats = flow_get_flm_stats,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8b825d8c48..8703d478b6 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -336,6 +336,8 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
/*
* Other
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 58/80] net/ntnic: add TSM module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (56 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 57/80] net/ntnic: added flm stat interface Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 59/80] net/ntnic: add STA module Serhii Iliushyk
` (21 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The TSM module, which operates the timers
in the physical NIC, was added.
The necessary defines and implementation were added.
The Time Stamp Module controls every aspect of packet timestamping,
including time synchronization, time stamp format, PTP protocol, etc.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../net/ntnic/nthw/core/include/nthw_tsm.h | 56 ++++++
drivers/net/ntnic/nthw/core/nthw_fpga.c | 47 +++++
drivers/net/ntnic/nthw/core/nthw_tsm.c | 167 ++++++++++++++++++
.../ntnic/nthw/supported/nthw_fpga_mod_defs.h | 1 +
.../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 28 +++
7 files changed, 301 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/core/include/nthw_tsm.h
create mode 100644 drivers/net/ntnic/nthw/core/nthw_tsm.c
create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index ed5a201fd5..a6c4fec0be 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -41,6 +41,7 @@ sources = files(
'nthw/core/nt200a0x/reset/nthw_fpga_rst_nt200a0x.c',
'nthw/core/nthw_fpga.c',
'nthw/core/nthw_gmf.c',
+ 'nthw/core/nthw_tsm.c',
'nthw/core/nthw_gpio_phy.c',
'nthw/core/nthw_hif.c',
'nthw/core/nthw_i2cm.c',
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_tsm.h b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
new file mode 100644
index 0000000000..0a3bcdcaf5
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/include/nthw_tsm.h
@@ -0,0 +1,56 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef __NTHW_TSM_H__
+#define __NTHW_TSM_H__
+
+#include "stdint.h"
+
+#include "nthw_fpga_model.h"
+
+struct nthw_tsm {
+ nthw_fpga_t *mp_fpga;
+ nthw_module_t *mp_mod_tsm;
+ int mn_instance;
+
+ nthw_field_t *mp_fld_config_ts_format;
+
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t0;
+ nthw_field_t *mp_fld_timer_ctrl_timer_en_t1;
+
+ nthw_field_t *mp_fld_timer_timer_t0_max_count;
+
+ nthw_field_t *mp_fld_timer_timer_t1_max_count;
+
+ nthw_register_t *mp_reg_ts_lo;
+ nthw_field_t *mp_fld_ts_lo;
+
+ nthw_register_t *mp_reg_ts_hi;
+ nthw_field_t *mp_fld_ts_hi;
+
+ nthw_register_t *mp_reg_time_lo;
+ nthw_field_t *mp_fld_time_lo;
+
+ nthw_register_t *mp_reg_time_hi;
+ nthw_field_t *mp_fld_time_hi;
+};
+
+typedef struct nthw_tsm nthw_tsm_t;
+typedef struct nthw_tsm nthw_tsm;
+
+nthw_tsm_t *nthw_tsm_new(void);
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance);
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts);
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time);
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable);
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val);
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val);
+
+#endif /* __NTHW_TSM_H__ */
diff --git a/drivers/net/ntnic/nthw/core/nthw_fpga.c b/drivers/net/ntnic/nthw/core/nthw_fpga.c
index 9448c29de1..ca69a9d5b1 100644
--- a/drivers/net/ntnic/nthw/core/nthw_fpga.c
+++ b/drivers/net/ntnic/nthw/core/nthw_fpga.c
@@ -13,6 +13,8 @@
#include "nthw_fpga_instances.h"
#include "nthw_fpga_mod_str_map.h"
+#include "nthw_tsm.h"
+
#include <arpa/inet.h>
int nthw_fpga_get_param_info(struct fpga_info_s *p_fpga_info, nthw_fpga_t *p_fpga)
@@ -179,6 +181,7 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
nthw_hif_t *p_nthw_hif = NULL;
nthw_pcie3_t *p_nthw_pcie3 = NULL;
nthw_rac_t *p_nthw_rac = NULL;
+ nthw_tsm_t *p_nthw_tsm = NULL;
mcu_info_t *p_mcu_info = &p_fpga_info->mcu_info;
uint64_t n_fpga_ident = 0;
@@ -331,6 +334,50 @@ int nthw_fpga_init(struct fpga_info_s *p_fpga_info)
p_fpga_info->mp_nthw_hif = p_nthw_hif;
+ p_nthw_tsm = nthw_tsm_new();
+
+ if (p_nthw_tsm) {
+ nthw_tsm_init(p_nthw_tsm, p_fpga, 0);
+
+ nthw_tsm_set_config_ts_format(p_nthw_tsm, 1); /* 1 = TSM: TS format native */
+
+ /* Timer T0 - stat toggle timer */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t0_max_count(p_nthw_tsm, 50 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t0_enable(p_nthw_tsm, true);
+
+ /* Timer T1 - keep alive timer */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, false);
+ nthw_tsm_set_timer_t1_max_count(p_nthw_tsm, 100 * 1000 * 1000); /* ns */
+ nthw_tsm_set_timer_t1_enable(p_nthw_tsm, true);
+ }
+
+ p_fpga_info->mp_nthw_tsm = p_nthw_tsm;
+
+ /* TSM sample triggering: test validation... */
+#if defined(DEBUG) && (1)
+ {
+ uint64_t n_time, n_ts;
+ int i;
+
+ for (i = 0; i < 4; i++) {
+ if (p_nthw_hif)
+ nthw_hif_trigger_sample_time(p_nthw_hif);
+
+ else if (p_nthw_pcie3)
+ nthw_pcie3_trigger_sample_time(p_nthw_pcie3);
+
+ nthw_tsm_get_time(p_nthw_tsm, &n_time);
+ nthw_tsm_get_ts(p_nthw_tsm, &n_ts);
+
+ NT_LOG(DBG, NTHW, "%s: TSM time: %016" PRIX64 " %016" PRIX64 "\n",
+ p_adapter_id_str, n_time, n_ts);
+
+ nt_os_wait_usec(1000);
+ }
+ }
+#endif
+
return res;
}
diff --git a/drivers/net/ntnic/nthw/core/nthw_tsm.c b/drivers/net/ntnic/nthw/core/nthw_tsm.c
new file mode 100644
index 0000000000..b88dcb9b0b
--- /dev/null
+++ b/drivers/net/ntnic/nthw/core/nthw_tsm.c
@@ -0,0 +1,167 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include "ntlog.h"
+
+#include "nthw_drv.h"
+#include "nthw_register.h"
+
+#include "nthw_tsm.h"
+
+nthw_tsm_t *nthw_tsm_new(void)
+{
+ nthw_tsm_t *p = malloc(sizeof(nthw_tsm_t));
+
+ if (p)
+ memset(p, 0, sizeof(nthw_tsm_t));
+
+ return p;
+}
+
+int nthw_tsm_init(nthw_tsm_t *p, nthw_fpga_t *p_fpga, int n_instance)
+{
+ const char *const p_adapter_id_str = p_fpga->p_fpga_info->mp_adapter_id_str;
+ nthw_module_t *mod = nthw_fpga_query_module(p_fpga, MOD_TSM, n_instance);
+
+ if (p == NULL)
+ return mod == NULL ? -1 : 0;
+
+ if (mod == NULL) {
+ NT_LOG(ERR, NTHW, "%s: TSM %d: no such instance", p_adapter_id_str, n_instance);
+ return -1;
+ }
+
+ p->mp_fpga = p_fpga;
+ p->mn_instance = n_instance;
+ p->mp_mod_tsm = mod;
+
+ {
+ nthw_register_t *p_reg;
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_CONFIG);
+ p->mp_fld_config_ts_format = nthw_register_get_field(p_reg, TSM_CONFIG_TS_FORMAT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_CTRL);
+ p->mp_fld_timer_ctrl_timer_en_t0 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T0);
+ p->mp_fld_timer_ctrl_timer_en_t1 =
+ nthw_register_get_field(p_reg, TSM_TIMER_CTRL_TIMER_EN_T1);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T0);
+ p->mp_fld_timer_timer_t0_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T0_MAX_COUNT);
+
+ p_reg = nthw_module_get_register(p->mp_mod_tsm, TSM_TIMER_T1);
+ p->mp_fld_timer_timer_t1_max_count =
+ nthw_register_get_field(p_reg, TSM_TIMER_T1_MAX_COUNT);
+
+ p->mp_reg_time_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_LO);
+ p_reg = p->mp_reg_time_lo;
+ p->mp_fld_time_lo = nthw_register_get_field(p_reg, TSM_TIME_LO_NS);
+
+ p->mp_reg_time_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TIME_HI);
+ p_reg = p->mp_reg_time_hi;
+ p->mp_fld_time_hi = nthw_register_get_field(p_reg, TSM_TIME_HI_SEC);
+
+ p->mp_reg_ts_lo = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_LO);
+ p_reg = p->mp_reg_ts_lo;
+ p->mp_fld_ts_lo = nthw_register_get_field(p_reg, TSM_TS_LO_TIME);
+
+ p->mp_reg_ts_hi = nthw_module_get_register(p->mp_mod_tsm, TSM_TS_HI);
+ p_reg = p->mp_reg_ts_hi;
+ p->mp_fld_ts_hi = nthw_register_get_field(p_reg, TSM_TS_HI_TIME);
+ }
+ return 0;
+}
+
+int nthw_tsm_get_ts(nthw_tsm_t *p, uint64_t *p_ts)
+{
+ uint32_t n_ts_lo, n_ts_hi;
+ uint64_t val;
+
+ if (!p_ts)
+ return -1;
+
+ n_ts_lo = nthw_field_get_updated(p->mp_fld_ts_lo);
+ n_ts_hi = nthw_field_get_updated(p->mp_fld_ts_hi);
+
+ val = ((((uint64_t)n_ts_hi) << 32UL) | n_ts_lo);
+
+ if (p_ts)
+ *p_ts = val;
+
+ return 0;
+}
+
+int nthw_tsm_get_time(nthw_tsm_t *p, uint64_t *p_time)
+{
+ uint32_t n_time_lo, n_time_hi;
+ uint64_t val;
+
+ if (!p_time)
+ return -1;
+
+ n_time_lo = nthw_field_get_updated(p->mp_fld_time_lo);
+ n_time_hi = nthw_field_get_updated(p->mp_fld_time_hi);
+
+ val = ((((uint64_t)n_time_hi) << 32UL) | n_time_lo);
+
+ if (p_time)
+ *p_time = val;
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t0);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t0_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T0 - stat toggle timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t0_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t0_max_count,
+ n_timer_val); /* ns (50*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_enable(nthw_tsm_t *p, bool b_enable)
+{
+ nthw_field_update_register(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ if (b_enable)
+ nthw_field_set_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ else
+ nthw_field_clr_flush(p->mp_fld_timer_ctrl_timer_en_t1);
+
+ return 0;
+}
+
+int nthw_tsm_set_timer_t1_max_count(nthw_tsm_t *p, uint32_t n_timer_val)
+{
+ /* Timer T1 - keep alive timer */
+ nthw_field_update_register(p->mp_fld_timer_timer_t1_max_count);
+ nthw_field_set_val_flush32(p->mp_fld_timer_timer_t1_max_count,
+ n_timer_val); /* ns (100*1000*1000) */
+ return 0;
+}
+
+int nthw_tsm_set_config_ts_format(nthw_tsm_t *p, uint32_t n_val)
+{
+ nthw_field_update_register(p->mp_fld_config_ts_format);
+ /* 0x1: Native - 10ns units, start date: 1970-01-01. */
+ nthw_field_set_val_flush32(p->mp_fld_config_ts_format, n_val);
+ return 0;
+}
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
index 03122acaf5..e6ed9e714b 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_defs.h
@@ -48,6 +48,7 @@
#define MOD_SLC (0x1aef1f38UL)
#define MOD_SLC_LR (0x969fc50bUL)
#define MOD_STA (0x76fae64dUL)
+#define MOD_TSM (0x35422a24UL)
#define MOD_TX_CPY (0x60acf217UL)
#define MOD_TX_INS (0x59afa100UL)
#define MOD_TX_RPL (0x1095dfbbUL)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
index 7067f4b1d0..4d299c6aa8 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h
@@ -44,6 +44,7 @@
#include "nthw_fpga_reg_defs_rpp_lr.h"
#include "nthw_fpga_reg_defs_rst9563.h"
#include "nthw_fpga_reg_defs_sdc.h"
+#include "nthw_fpga_reg_defs_tsm.h"
#include "nthw_fpga_reg_defs_slc.h"
#include "nthw_fpga_reg_defs_slc_lr.h"
#include "nthw_fpga_reg_defs_sta.h"
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
new file mode 100644
index 0000000000..a087850aa4
--- /dev/null
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -0,0 +1,28 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _NTHW_FPGA_REG_DEFS_TSM_
+#define _NTHW_FPGA_REG_DEFS_TSM_
+
+/* TSM */
+#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_TIMER_CTRL (0x648da051UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
+#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
+#define TSM_TIMER_T0 (0x417217a5UL)
+#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
+#define TSM_TIMER_T1 (0x36752733UL)
+#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HI (0x175acea1UL)
+#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
+#define TSM_TIME_LO (0x9a55ae90UL)
+#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TS_HI (0xccfe9e5eUL)
+#define TSM_TS_HI_TIME (0xc23fed30UL)
+#define TSM_TS_LO (0x41f1fe6fUL)
+#define TSM_TS_LO_TIME (0xe0292a3eUL)
+
+#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 59/80] net/ntnic: add STA module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (57 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 58/80] net/ntnic: add TSM module Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 60/80] net/ntnic: add TSM module Serhii Iliushyk
` (20 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with STA module
support, which enables the statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 92 ++++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_sta.h | 8 ++
3 files changed, 100 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index a3d9f94fc6..efdb084cd6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2486,6 +2486,95 @@ static nthw_fpga_register_init_s slc_registers[] = {
{ SLC_RCP_DATA, 1, 36, NTHW_FPGA_REG_TYPE_WO, 0, 7, slc_rcp_data_fields },
};
+static nthw_fpga_field_init_s sta_byte_fields[] = {
+ { STA_BYTE_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_cfg_fields[] = {
+ { STA_CFG_CNT_CLEAR, 1, 1, 0 },
+ { STA_CFG_DMA_ENA, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_cv_err_fields[] = {
+ { STA_CV_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_fcs_err_fields[] = {
+ { STA_FCS_ERR_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_lsb_fields[] = {
+ { STA_HOST_ADR_LSB_LSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_host_adr_msb_fields[] = {
+ { STA_HOST_ADR_MSB_MSB, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s sta_load_bin_fields[] = {
+ { STA_LOAD_BIN_BIN, 32, 0, 8388607 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_0_fields[] = {
+ { STA_LOAD_BPS_RX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_rx_1_fields[] = {
+ { STA_LOAD_BPS_RX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_0_fields[] = {
+ { STA_LOAD_BPS_TX_0_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_bps_tx_1_fields[] = {
+ { STA_LOAD_BPS_TX_1_BPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_0_fields[] = {
+ { STA_LOAD_PPS_RX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_rx_1_fields[] = {
+ { STA_LOAD_PPS_RX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_0_fields[] = {
+ { STA_LOAD_PPS_TX_0_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_load_pps_tx_1_fields[] = {
+ { STA_LOAD_PPS_TX_1_PPS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_pckt_fields[] = {
+ { STA_PCKT_CNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s sta_status_fields[] = {
+ { STA_STATUS_STAT_TOGGLE_MISSED, 1, 0, 0x0000 },
+};
+
+static nthw_fpga_register_init_s sta_registers[] = {
+ { STA_BYTE, 4, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_byte_fields },
+ { STA_CFG, 0, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, sta_cfg_fields },
+ { STA_CV_ERR, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_cv_err_fields },
+ { STA_FCS_ERR, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_fcs_err_fields },
+ { STA_HOST_ADR_LSB, 1, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_lsb_fields },
+ { STA_HOST_ADR_MSB, 2, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, sta_host_adr_msb_fields },
+ { STA_LOAD_BIN, 8, 32, NTHW_FPGA_REG_TYPE_WO, 8388607, 1, sta_load_bin_fields },
+ { STA_LOAD_BPS_RX_0, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_0_fields },
+ { STA_LOAD_BPS_RX_1, 13, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_rx_1_fields },
+ { STA_LOAD_BPS_TX_0, 15, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_0_fields },
+ { STA_LOAD_BPS_TX_1, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_bps_tx_1_fields },
+ { STA_LOAD_PPS_RX_0, 10, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_0_fields },
+ { STA_LOAD_PPS_RX_1, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_rx_1_fields },
+ { STA_LOAD_PPS_TX_0, 14, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_0_fields },
+ { STA_LOAD_PPS_TX_1, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_load_pps_tx_1_fields },
+ { STA_PCKT, 3, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, sta_pckt_fields },
+ { STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2537,6 +2626,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_CPY, 0, MOD_CPY, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 9216, 26, cpy_registers },
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
+ { MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2695,5 +2785,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 35, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index 150b9dd976..a2ab266931 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -19,5 +19,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RAC, "RAC" },
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
+ { MOD_STA, "STA" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
index 640ffcbc52..0cd183fcaa 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_sta.h
@@ -7,11 +7,17 @@
#define _NTHW_FPGA_REG_DEFS_STA_
/* STA */
+#define STA_BYTE (0xa08364d4UL)
+#define STA_BYTE_CNT (0x3119e6bcUL)
#define STA_CFG (0xcecaf9f4UL)
#define STA_CFG_CNT_CLEAR (0xc325e12eUL)
#define STA_CFG_CNT_FRZ (0x8c27a596UL)
#define STA_CFG_DMA_ENA (0x940dbacUL)
#define STA_CFG_TX_DISABLE (0x30f43250UL)
+#define STA_CV_ERR (0x7db7db5dUL)
+#define STA_CV_ERR_CNT (0x2c02fbbeUL)
+#define STA_FCS_ERR (0xa0de1647UL)
+#define STA_FCS_ERR_CNT (0xc68c37d1UL)
#define STA_HOST_ADR_LSB (0xde569336UL)
#define STA_HOST_ADR_LSB_LSB (0xb6f2f94bUL)
#define STA_HOST_ADR_MSB (0xdf94f901UL)
@@ -34,6 +40,8 @@
#define STA_LOAD_PPS_TX_0_PPS (0x788a7a7bUL)
#define STA_LOAD_PPS_TX_1 (0xd37d1c89UL)
#define STA_LOAD_PPS_TX_1_PPS (0x45ea53cbUL)
+#define STA_PCKT (0xecc8f30aUL)
+#define STA_PCKT_CNT (0x63291d16UL)
#define STA_STATUS (0x91c5c51cUL)
#define STA_STATUS_STAT_TOGGLE_MISSED (0xf7242b11UL)
--
2.45.0
* [PATCH v5 60/80] net/ntnic: add TSM module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (58 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 59/80] net/ntnic: add STA module Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 61/80] net/ntnic: add xStats Serhii Iliushyk
` (19 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
The FPGA map was extended with TSM module
support, which enables statistics functionality.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
.../supported/nthw_fpga_9563_055_049_0000.c | 394 +++++++++++++++++-
.../nthw/supported/nthw_fpga_mod_str_map.c | 1 +
.../nthw/supported/nthw_fpga_reg_defs_tsm.h | 177 ++++++++
4 files changed, 572 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e5d5abd0ed..64351bcdc7 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -12,6 +12,7 @@ Unicast MAC filter = Y
Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
+Basic stats = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index efdb084cd6..620968ceb6 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -2575,6 +2575,397 @@ static nthw_fpga_register_init_s sta_registers[] = {
{ STA_STATUS, 7, 1, NTHW_FPGA_REG_TYPE_RC1, 0, 1, sta_status_fields },
};
+static nthw_fpga_field_init_s tsm_con0_config_fields[] = {
+ { TSM_CON0_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON0_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON0_CONFIG_PORT, 3, 0, 0 }, { TSM_CON0_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON0_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_interface_fields[] = {
+ { TSM_CON0_INTERFACE_EX_TERM, 2, 0, 3 }, { TSM_CON0_INTERFACE_IN_REF_PWM, 8, 12, 128 },
+ { TSM_CON0_INTERFACE_PWM_ENA, 1, 2, 0 }, { TSM_CON0_INTERFACE_RESERVED, 1, 3, 0 },
+ { TSM_CON0_INTERFACE_VTERM_PWM, 8, 4, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_hi_fields[] = {
+ { TSM_CON0_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con0_sample_lo_fields[] = {
+ { TSM_CON0_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_config_fields[] = {
+ { TSM_CON1_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON1_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON1_CONFIG_PORT, 3, 0, 0 }, { TSM_CON1_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON1_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_hi_fields[] = {
+ { TSM_CON1_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con1_sample_lo_fields[] = {
+ { TSM_CON1_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_config_fields[] = {
+ { TSM_CON2_CONFIG_BLIND, 5, 8, 9 }, { TSM_CON2_CONFIG_DC_SRC, 3, 5, 0 },
+ { TSM_CON2_CONFIG_PORT, 3, 0, 0 }, { TSM_CON2_CONFIG_PPSIN_2_5V, 1, 13, 0 },
+ { TSM_CON2_CONFIG_SAMPLE_EDGE, 2, 3, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_hi_fields[] = {
+ { TSM_CON2_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con2_sample_lo_fields[] = {
+ { TSM_CON2_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_config_fields[] = {
+ { TSM_CON3_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON3_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON3_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_hi_fields[] = {
+ { TSM_CON3_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con3_sample_lo_fields[] = {
+ { TSM_CON3_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_config_fields[] = {
+ { TSM_CON4_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON4_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON4_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_hi_fields[] = {
+ { TSM_CON4_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con4_sample_lo_fields[] = {
+ { TSM_CON4_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_config_fields[] = {
+ { TSM_CON5_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON5_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON5_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_hi_fields[] = {
+ { TSM_CON5_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con5_sample_lo_fields[] = {
+ { TSM_CON5_SAMPLE_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_config_fields[] = {
+ { TSM_CON6_CONFIG_BLIND, 5, 5, 26 },
+ { TSM_CON6_CONFIG_PORT, 3, 0, 1 },
+ { TSM_CON6_CONFIG_SAMPLE_EDGE, 2, 3, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_hi_fields[] = {
+ { TSM_CON6_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con6_sample_lo_fields[] = {
+ { TSM_CON6_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_hi_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_con7_host_sample_lo_fields[] = {
+ { TSM_CON7_HOST_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_config_fields[] = {
+ { TSM_CONFIG_NTTS_SRC, 2, 5, 0 }, { TSM_CONFIG_NTTS_SYNC, 1, 4, 0 },
+ { TSM_CONFIG_TIMESET_EDGE, 2, 8, 1 }, { TSM_CONFIG_TIMESET_SRC, 3, 10, 0 },
+ { TSM_CONFIG_TIMESET_UP, 1, 7, 0 }, { TSM_CONFIG_TS_FORMAT, 4, 0, 1 },
+};
+
+static nthw_fpga_field_init_s tsm_int_config_fields[] = {
+ { TSM_INT_CONFIG_AUTO_DISABLE, 1, 0, 0 },
+ { TSM_INT_CONFIG_MASK, 19, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_int_stat_fields[] = {
+ { TSM_INT_STAT_CAUSE, 19, 1, 0 },
+ { TSM_INT_STAT_ENABLE, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_led_fields[] = {
+ { TSM_LED_LED0_BG_COLOR, 2, 3, 0 }, { TSM_LED_LED0_COLOR, 2, 1, 0 },
+ { TSM_LED_LED0_MODE, 1, 0, 0 }, { TSM_LED_LED0_SRC, 4, 5, 0 },
+ { TSM_LED_LED1_BG_COLOR, 2, 12, 0 }, { TSM_LED_LED1_COLOR, 2, 10, 0 },
+ { TSM_LED_LED1_MODE, 1, 9, 0 }, { TSM_LED_LED1_SRC, 4, 14, 1 },
+ { TSM_LED_LED2_BG_COLOR, 2, 21, 0 }, { TSM_LED_LED2_COLOR, 2, 19, 0 },
+ { TSM_LED_LED2_MODE, 1, 18, 0 }, { TSM_LED_LED2_SRC, 4, 23, 2 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_config_fields[] = {
+ { TSM_NTTS_CONFIG_AUTO_HARDSET, 1, 5, 1 },
+ { TSM_NTTS_CONFIG_EXT_CLK_ADJ, 1, 6, 0 },
+ { TSM_NTTS_CONFIG_HIGH_SAMPLE, 1, 4, 0 },
+ { TSM_NTTS_CONFIG_TS_SRC_FORMAT, 4, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ext_stat_fields[] = {
+ { TSM_NTTS_EXT_STAT_MASTER_ID, 8, 16, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_REV, 8, 24, 0x0000 },
+ { TSM_NTTS_EXT_STAT_MASTER_STAT, 16, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_hi_fields[] = {
+ { TSM_NTTS_LIMIT_HI_SEC, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_limit_lo_fields[] = {
+ { TSM_NTTS_LIMIT_LO_NS, 32, 0, 100000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_offset_fields[] = {
+ { TSM_NTTS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_hi_fields[] = {
+ { TSM_NTTS_SAMPLE_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_sample_lo_fields[] = {
+ { TSM_NTTS_SAMPLE_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_stat_fields[] = {
+ { TSM_NTTS_STAT_NTTS_VALID, 1, 0, 0 },
+ { TSM_NTTS_STAT_SIGNAL_LOST, 8, 1, 0 },
+ { TSM_NTTS_STAT_SYNC_LOST, 8, 9, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_hi_fields[] = {
+ { TSM_NTTS_TS_T0_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_lo_fields[] = {
+ { TSM_NTTS_TS_T0_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ntts_ts_t0_offset_fields[] = {
+ { TSM_NTTS_TS_T0_OFFSET_COUNT, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_ctrl_fields[] = {
+ { TSM_PB_CTRL_INSTMEM_WR, 1, 1, 0 },
+ { TSM_PB_CTRL_RST, 1, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pb_instmem_fields[] = {
+ { TSM_PB_INSTMEM_MEM_ADDR, 14, 0, 0 },
+ { TSM_PB_INSTMEM_MEM_DATA, 18, 14, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_i_fields[] = {
+ { TSM_PI_CTRL_I_VAL, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_ki_fields[] = {
+ { TSM_PI_CTRL_KI_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_kp_fields[] = {
+ { TSM_PI_CTRL_KP_GAIN, 24, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_pi_ctrl_shl_fields[] = {
+ { TSM_PI_CTRL_SHL_VAL, 4, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_stat_fields[] = {
+ { TSM_STAT_HARD_SYNC, 8, 8, 0 }, { TSM_STAT_LINK_CON0, 1, 0, 0 },
+ { TSM_STAT_LINK_CON1, 1, 1, 0 }, { TSM_STAT_LINK_CON2, 1, 2, 0 },
+ { TSM_STAT_LINK_CON3, 1, 3, 0 }, { TSM_STAT_LINK_CON4, 1, 4, 0 },
+ { TSM_STAT_LINK_CON5, 1, 5, 0 }, { TSM_STAT_NTTS_INSYNC, 1, 6, 0 },
+ { TSM_STAT_PTP_MI_PRESENT, 1, 7, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_ctrl_fields[] = {
+ { TSM_TIMER_CTRL_TIMER_EN_T0, 1, 0, 0 },
+ { TSM_TIMER_CTRL_TIMER_EN_T1, 1, 1, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t0_fields[] = {
+ { TSM_TIMER_T0_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_timer_t1_fields[] = {
+ { TSM_TIMER_T1_MAX_COUNT, 30, 0, 50000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_hi_fields[] = {
+ { TSM_TIME_HARDSET_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hardset_lo_fields[] = {
+ { TSM_TIME_HARDSET_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_hi_fields[] = {
+ { TSM_TIME_HI_SEC, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_lo_fields[] = {
+ { TSM_TIME_LO_NS, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_time_rate_adj_fields[] = {
+ { TSM_TIME_RATE_ADJ_FRACTION, 29, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_hi_fields[] = {
+ { TSM_TS_HI_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_lo_fields[] = {
+ { TSM_TS_LO_TIME, 32, 0, 0x0000 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_offset_fields[] = {
+ { TSM_TS_OFFSET_NS, 30, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_fields[] = {
+ { TSM_TS_STAT_OVERRUN, 1, 16, 0 },
+ { TSM_TS_STAT_SAMPLES, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_hi_offset_fields[] = {
+ { TSM_TS_STAT_HI_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_lo_offset_fields[] = {
+ { TSM_TS_STAT_LO_OFFSET_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_hi_fields[] = {
+ { TSM_TS_STAT_TAR_HI_SEC, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_tar_lo_fields[] = {
+ { TSM_TS_STAT_TAR_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x_fields[] = {
+ { TSM_TS_STAT_X_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_hi_fields[] = {
+ { TSM_TS_STAT_X2_HI_NS, 16, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_ts_stat_x2_lo_fields[] = {
+ { TSM_TS_STAT_X2_LO_NS, 32, 0, 0 },
+};
+
+static nthw_fpga_field_init_s tsm_utc_offset_fields[] = {
+ { TSM_UTC_OFFSET_SEC, 8, 0, 0 },
+};
+
+static nthw_fpga_register_init_s tsm_registers[] = {
+ { TSM_CON0_CONFIG, 24, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con0_config_fields },
+ {
+ TSM_CON0_INTERFACE, 25, 20, NTHW_FPGA_REG_TYPE_RW, 524291, 5,
+ tsm_con0_interface_fields
+ },
+ { TSM_CON0_SAMPLE_HI, 27, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_hi_fields },
+ { TSM_CON0_SAMPLE_LO, 26, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con0_sample_lo_fields },
+ { TSM_CON1_CONFIG, 28, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con1_config_fields },
+ { TSM_CON1_SAMPLE_HI, 30, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_hi_fields },
+ { TSM_CON1_SAMPLE_LO, 29, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con1_sample_lo_fields },
+ { TSM_CON2_CONFIG, 31, 14, NTHW_FPGA_REG_TYPE_RW, 2320, 5, tsm_con2_config_fields },
+ { TSM_CON2_SAMPLE_HI, 33, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_hi_fields },
+ { TSM_CON2_SAMPLE_LO, 32, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con2_sample_lo_fields },
+ { TSM_CON3_CONFIG, 34, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con3_config_fields },
+ { TSM_CON3_SAMPLE_HI, 36, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_hi_fields },
+ { TSM_CON3_SAMPLE_LO, 35, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con3_sample_lo_fields },
+ { TSM_CON4_CONFIG, 37, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con4_config_fields },
+ { TSM_CON4_SAMPLE_HI, 39, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_hi_fields },
+ { TSM_CON4_SAMPLE_LO, 38, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con4_sample_lo_fields },
+ { TSM_CON5_CONFIG, 40, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con5_config_fields },
+ { TSM_CON5_SAMPLE_HI, 42, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_hi_fields },
+ { TSM_CON5_SAMPLE_LO, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con5_sample_lo_fields },
+ { TSM_CON6_CONFIG, 43, 10, NTHW_FPGA_REG_TYPE_RW, 841, 3, tsm_con6_config_fields },
+ { TSM_CON6_SAMPLE_HI, 45, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_hi_fields },
+ { TSM_CON6_SAMPLE_LO, 44, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_con6_sample_lo_fields },
+ {
+ TSM_CON7_HOST_SAMPLE_HI, 47, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_hi_fields
+ },
+ {
+ TSM_CON7_HOST_SAMPLE_LO, 46, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_con7_host_sample_lo_fields
+ },
+ { TSM_CONFIG, 0, 13, NTHW_FPGA_REG_TYPE_RW, 257, 6, tsm_config_fields },
+ { TSM_INT_CONFIG, 2, 20, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_int_config_fields },
+ { TSM_INT_STAT, 3, 20, NTHW_FPGA_REG_TYPE_MIXED, 0, 2, tsm_int_stat_fields },
+ { TSM_LED, 4, 27, NTHW_FPGA_REG_TYPE_RW, 16793600, 12, tsm_led_fields },
+ { TSM_NTTS_CONFIG, 13, 7, NTHW_FPGA_REG_TYPE_RW, 32, 4, tsm_ntts_config_fields },
+ { TSM_NTTS_EXT_STAT, 15, 32, NTHW_FPGA_REG_TYPE_MIXED, 0, 3, tsm_ntts_ext_stat_fields },
+ { TSM_NTTS_LIMIT_HI, 23, 16, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_limit_hi_fields },
+ { TSM_NTTS_LIMIT_LO, 22, 32, NTHW_FPGA_REG_TYPE_RW, 100000, 1, tsm_ntts_limit_lo_fields },
+ { TSM_NTTS_OFFSET, 21, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ntts_offset_fields },
+ { TSM_NTTS_SAMPLE_HI, 19, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_hi_fields },
+ { TSM_NTTS_SAMPLE_LO, 18, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_sample_lo_fields },
+ { TSM_NTTS_STAT, 14, 17, NTHW_FPGA_REG_TYPE_RO, 0, 3, tsm_ntts_stat_fields },
+ { TSM_NTTS_TS_T0_HI, 17, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_hi_fields },
+ { TSM_NTTS_TS_T0_LO, 16, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ntts_ts_t0_lo_fields },
+ {
+ TSM_NTTS_TS_T0_OFFSET, 20, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ntts_ts_t0_offset_fields
+ },
+ { TSM_PB_CTRL, 63, 2, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_ctrl_fields },
+ { TSM_PB_INSTMEM, 64, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, tsm_pb_instmem_fields },
+ { TSM_PI_CTRL_I, 54, 32, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_i_fields },
+ { TSM_PI_CTRL_KI, 52, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_ki_fields },
+ { TSM_PI_CTRL_KP, 51, 24, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_pi_ctrl_kp_fields },
+ { TSM_PI_CTRL_SHL, 53, 4, NTHW_FPGA_REG_TYPE_WO, 0, 1, tsm_pi_ctrl_shl_fields },
+ { TSM_STAT, 1, 16, NTHW_FPGA_REG_TYPE_RO, 0, 9, tsm_stat_fields },
+ { TSM_TIMER_CTRL, 48, 2, NTHW_FPGA_REG_TYPE_RW, 0, 2, tsm_timer_ctrl_fields },
+ { TSM_TIMER_T0, 49, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t0_fields },
+ { TSM_TIMER_T1, 50, 30, NTHW_FPGA_REG_TYPE_RW, 50000, 1, tsm_timer_t1_fields },
+ { TSM_TIME_HARDSET_HI, 12, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_hi_fields },
+ { TSM_TIME_HARDSET_LO, 11, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_time_hardset_lo_fields },
+ { TSM_TIME_HI, 9, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_hi_fields },
+ { TSM_TIME_LO, 8, 32, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_lo_fields },
+ { TSM_TIME_RATE_ADJ, 10, 29, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_time_rate_adj_fields },
+ { TSM_TS_HI, 6, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_hi_fields },
+ { TSM_TS_LO, 5, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_lo_fields },
+ { TSM_TS_OFFSET, 7, 30, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_ts_offset_fields },
+ { TSM_TS_STAT, 55, 17, NTHW_FPGA_REG_TYPE_RO, 0, 2, tsm_ts_stat_fields },
+ {
+ TSM_TS_STAT_HI_OFFSET, 62, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_hi_offset_fields
+ },
+ {
+ TSM_TS_STAT_LO_OFFSET, 61, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1,
+ tsm_ts_stat_lo_offset_fields
+ },
+ { TSM_TS_STAT_TAR_HI, 57, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_hi_fields },
+ { TSM_TS_STAT_TAR_LO, 56, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_tar_lo_fields },
+ { TSM_TS_STAT_X, 58, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x_fields },
+ { TSM_TS_STAT_X2_HI, 60, 16, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_hi_fields },
+ { TSM_TS_STAT_X2_LO, 59, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, tsm_ts_stat_x2_lo_fields },
+ { TSM_UTC_OFFSET, 65, 8, NTHW_FPGA_REG_TYPE_RW, 0, 1, tsm_utc_offset_fields },
+};
+
static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_CAT, 0, MOD_CAT, 0, 21, NTHW_FPGA_BUS_TYPE_RAB1, 768, 34, cat_registers },
{ MOD_CSU, 0, MOD_CSU, 0, 0, NTHW_FPGA_BUS_TYPE_RAB1, 9728, 2, csu_registers },
@@ -2627,6 +3018,7 @@ static nthw_fpga_module_init_s fpga_modules[] = {
{ MOD_TX_INS, 0, MOD_INS, 0, 2, NTHW_FPGA_BUS_TYPE_RAB1, 8704, 2, ins_registers },
{ MOD_TX_RPL, 0, MOD_RPL, 0, 4, NTHW_FPGA_BUS_TYPE_RAB1, 8960, 6, rpl_registers },
{ MOD_STA, 0, MOD_STA, 0, 9, NTHW_FPGA_BUS_TYPE_RAB0, 2048, 17, sta_registers },
+ { MOD_TSM, 0, MOD_TSM, 0, 8, NTHW_FPGA_BUS_TYPE_RAB2, 1024, 66, tsm_registers },
};
static nthw_fpga_prod_param_s product_parameters[] = {
@@ -2785,5 +3177,5 @@ static nthw_fpga_prod_param_s product_parameters[] = {
};
nthw_fpga_prod_init_s nthw_fpga_9563_055_049_0000 = {
- 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 36, fpga_modules,
+ 200, 9563, 55, 49, 0, 0, 1726740521, 152, product_parameters, 37, fpga_modules,
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
index a2ab266931..e8ed7faf0d 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_mod_str_map.c
@@ -20,5 +20,6 @@ const struct nthw_fpga_mod_str_s sa_nthw_fpga_mod_str_map[] = {
{ MOD_RST9563, "RST9563" },
{ MOD_SDC, "SDC" },
{ MOD_STA, "STA" },
+ { MOD_TSM, "TSM" },
{ 0UL, NULL }
};
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
index a087850aa4..cdb733ee17 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_tsm.h
@@ -7,8 +7,158 @@
#define _NTHW_FPGA_REG_DEFS_TSM_
/* TSM */
+#define TSM_CON0_CONFIG (0xf893d371UL)
+#define TSM_CON0_CONFIG_BLIND (0x59ccfcbUL)
+#define TSM_CON0_CONFIG_DC_SRC (0x1879812bUL)
+#define TSM_CON0_CONFIG_PORT (0x3ff0bb08UL)
+#define TSM_CON0_CONFIG_PPSIN_2_5V (0xb8e78227UL)
+#define TSM_CON0_CONFIG_SAMPLE_EDGE (0x4a4022ebUL)
+#define TSM_CON0_INTERFACE (0x76e93b59UL)
+#define TSM_CON0_INTERFACE_EX_TERM (0xd079b416UL)
+#define TSM_CON0_INTERFACE_IN_REF_PWM (0x16f73c33UL)
+#define TSM_CON0_INTERFACE_PWM_ENA (0x3629e73fUL)
+#define TSM_CON0_INTERFACE_RESERVED (0xf9c5066UL)
+#define TSM_CON0_INTERFACE_VTERM_PWM (0x6d2b1e23UL)
+#define TSM_CON0_SAMPLE_HI (0x6e536b8UL)
+#define TSM_CON0_SAMPLE_HI_SEC (0x5fc26159UL)
+#define TSM_CON0_SAMPLE_LO (0x8bea5689UL)
+#define TSM_CON0_SAMPLE_LO_NS (0x13d0010dUL)
+#define TSM_CON1_CONFIG (0x3439d3efUL)
+#define TSM_CON1_CONFIG_BLIND (0x98932ebdUL)
+#define TSM_CON1_CONFIG_DC_SRC (0xa1825ac3UL)
+#define TSM_CON1_CONFIG_PORT (0xe266628dUL)
+#define TSM_CON1_CONFIG_PPSIN_2_5V (0x6f05027fUL)
+#define TSM_CON1_CONFIG_SAMPLE_EDGE (0x2f2719adUL)
+#define TSM_CON1_SAMPLE_HI (0xc76be978UL)
+#define TSM_CON1_SAMPLE_HI_SEC (0xe639bab1UL)
+#define TSM_CON1_SAMPLE_LO (0x4a648949UL)
+#define TSM_CON1_SAMPLE_LO_NS (0x8edfe07bUL)
+#define TSM_CON2_CONFIG (0xbab6d40cUL)
+#define TSM_CON2_CONFIG_BLIND (0xe4f20b66UL)
+#define TSM_CON2_CONFIG_DC_SRC (0xb0ff30baUL)
+#define TSM_CON2_CONFIG_PORT (0x5fac0e43UL)
+#define TSM_CON2_CONFIG_PPSIN_2_5V (0xcc5384d6UL)
+#define TSM_CON2_CONFIG_SAMPLE_EDGE (0x808e5467UL)
+#define TSM_CON2_SAMPLE_HI (0x5e898f79UL)
+#define TSM_CON2_SAMPLE_HI_SEC (0xf744d0c8UL)
+#define TSM_CON2_SAMPLE_LO (0xd386ef48UL)
+#define TSM_CON2_SAMPLE_LO_NS (0xf2bec5a0UL)
+#define TSM_CON3_CONFIG (0x761cd492UL)
+#define TSM_CON3_CONFIG_BLIND (0x79fdea10UL)
+#define TSM_CON3_CONFIG_PORT (0x823ad7c6UL)
+#define TSM_CON3_CONFIG_SAMPLE_EDGE (0xe5e96f21UL)
+#define TSM_CON3_SAMPLE_HI (0x9f0750b9UL)
+#define TSM_CON3_SAMPLE_HI_SEC (0x4ebf0b20UL)
+#define TSM_CON3_SAMPLE_LO (0x12083088UL)
+#define TSM_CON3_SAMPLE_LO_NS (0x6fb124d6UL)
+#define TSM_CON4_CONFIG (0x7cd9dd8bUL)
+#define TSM_CON4_CONFIG_BLIND (0x1c3040d0UL)
+#define TSM_CON4_CONFIG_PORT (0xff49d19eUL)
+#define TSM_CON4_CONFIG_SAMPLE_EDGE (0x4adc9b2UL)
+#define TSM_CON4_SAMPLE_HI (0xb63c453aUL)
+#define TSM_CON4_SAMPLE_HI_SEC (0xd5be043aUL)
+#define TSM_CON4_SAMPLE_LO (0x3b33250bUL)
+#define TSM_CON4_SAMPLE_LO_NS (0xa7c8e16UL)
+#define TSM_CON5_CONFIG (0xb073dd15UL)
+#define TSM_CON5_CONFIG_BLIND (0x813fa1a6UL)
+#define TSM_CON5_CONFIG_PORT (0x22df081bUL)
+#define TSM_CON5_CONFIG_SAMPLE_EDGE (0x61caf2f4UL)
+#define TSM_CON5_SAMPLE_HI (0x77b29afaUL)
+#define TSM_CON5_SAMPLE_HI_SEC (0x6c45dfd2UL)
+#define TSM_CON5_SAMPLE_LO (0xfabdfacbUL)
+#define TSM_CON5_SAMPLE_LO_TIME (0x945d87e8UL)
+#define TSM_CON6_CONFIG (0x3efcdaf6UL)
+#define TSM_CON6_CONFIG_BLIND (0xfd5e847dUL)
+#define TSM_CON6_CONFIG_PORT (0x9f1564d5UL)
+#define TSM_CON6_CONFIG_SAMPLE_EDGE (0xce63bf3eUL)
+#define TSM_CON6_SAMPLE_HI (0xee50fcfbUL)
+#define TSM_CON6_SAMPLE_HI_SEC (0x7d38b5abUL)
+#define TSM_CON6_SAMPLE_LO (0x635f9ccaUL)
+#define TSM_CON6_SAMPLE_LO_NS (0xeb124abbUL)
+#define TSM_CON7_HOST_SAMPLE_HI (0xdcd90e52UL)
+#define TSM_CON7_HOST_SAMPLE_HI_SEC (0xd98d3618UL)
+#define TSM_CON7_HOST_SAMPLE_LO (0x51d66e63UL)
+#define TSM_CON7_HOST_SAMPLE_LO_NS (0x8f5594ddUL)
#define TSM_CONFIG (0xef5dec83UL)
+#define TSM_CONFIG_NTTS_SRC (0x1b60227bUL)
+#define TSM_CONFIG_NTTS_SYNC (0x43e0a69dUL)
+#define TSM_CONFIG_TIMESET_EDGE (0x8c381127UL)
+#define TSM_CONFIG_TIMESET_SRC (0xe7590a31UL)
+#define TSM_CONFIG_TIMESET_UP (0x561980c1UL)
#define TSM_CONFIG_TS_FORMAT (0xe6efc2faUL)
+#define TSM_INT_CONFIG (0x9a0d52dUL)
+#define TSM_INT_CONFIG_AUTO_DISABLE (0x9581470UL)
+#define TSM_INT_CONFIG_MASK (0xf00cd3d7UL)
+#define TSM_INT_STAT (0xa4611a70UL)
+#define TSM_INT_STAT_CAUSE (0x315168cfUL)
+#define TSM_INT_STAT_ENABLE (0x980a12d1UL)
+#define TSM_LED (0x6ae05f87UL)
+#define TSM_LED_LED0_BG_COLOR (0x897cf9eeUL)
+#define TSM_LED_LED0_COLOR (0x6d7ada39UL)
+#define TSM_LED_LED0_MODE (0x6087b644UL)
+#define TSM_LED_LED0_SRC (0x4fe29639UL)
+#define TSM_LED_LED1_BG_COLOR (0x66be92d0UL)
+#define TSM_LED_LED1_COLOR (0xcb0dd18dUL)
+#define TSM_LED_LED1_MODE (0xabdb65e1UL)
+#define TSM_LED_LED1_SRC (0x7282bf89UL)
+#define TSM_LED_LED2_BG_COLOR (0x8d8929d3UL)
+#define TSM_LED_LED2_COLOR (0xfae5cb10UL)
+#define TSM_LED_LED2_MODE (0x2d4f174fUL)
+#define TSM_LED_LED2_SRC (0x3522c559UL)
+#define TSM_NTTS_CONFIG (0x8bc38bdeUL)
+#define TSM_NTTS_CONFIG_AUTO_HARDSET (0xd75be25dUL)
+#define TSM_NTTS_CONFIG_EXT_CLK_ADJ (0x700425b6UL)
+#define TSM_NTTS_CONFIG_HIGH_SAMPLE (0x37135b7eUL)
+#define TSM_NTTS_CONFIG_TS_SRC_FORMAT (0x6e6e707UL)
+#define TSM_NTTS_EXT_STAT (0x2b0315b7UL)
+#define TSM_NTTS_EXT_STAT_MASTER_ID (0xf263315eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_REV (0xd543795eUL)
+#define TSM_NTTS_EXT_STAT_MASTER_STAT (0x92d96f5eUL)
+#define TSM_NTTS_LIMIT_HI (0x1ddaa85fUL)
+#define TSM_NTTS_LIMIT_HI_SEC (0x315c6ef2UL)
+#define TSM_NTTS_LIMIT_LO (0x90d5c86eUL)
+#define TSM_NTTS_LIMIT_LO_NS (0xe6d94d9aUL)
+#define TSM_NTTS_OFFSET (0x6436e72UL)
+#define TSM_NTTS_OFFSET_NS (0x12d43a06UL)
+#define TSM_NTTS_SAMPLE_HI (0xcdc8aa3eUL)
+#define TSM_NTTS_SAMPLE_HI_SEC (0x4f6588fdUL)
+#define TSM_NTTS_SAMPLE_LO (0x40c7ca0fUL)
+#define TSM_NTTS_SAMPLE_LO_NS (0x6e43ff97UL)
+#define TSM_NTTS_STAT (0x6502b820UL)
+#define TSM_NTTS_STAT_NTTS_VALID (0x3e184471UL)
+#define TSM_NTTS_STAT_SIGNAL_LOST (0x178bedfdUL)
+#define TSM_NTTS_STAT_SYNC_LOST (0xe4cd53dfUL)
+#define TSM_NTTS_TS_T0_HI (0x1300d1b6UL)
+#define TSM_NTTS_TS_T0_HI_TIME (0xa016ae4fUL)
+#define TSM_NTTS_TS_T0_LO (0x9e0fb187UL)
+#define TSM_NTTS_TS_T0_LO_TIME (0x82006941UL)
+#define TSM_NTTS_TS_T0_OFFSET (0xbf70ce4fUL)
+#define TSM_NTTS_TS_T0_OFFSET_COUNT (0x35dd4398UL)
+#define TSM_PB_CTRL (0x7a8b60faUL)
+#define TSM_PB_CTRL_INSTMEM_WR (0xf96e2cbcUL)
+#define TSM_PB_CTRL_RESET (0xa38ade8bUL)
+#define TSM_PB_CTRL_RST (0x3aaa82f4UL)
+#define TSM_PB_INSTMEM (0xb54aeecUL)
+#define TSM_PB_INSTMEM_MEM_ADDR (0x9ac79b6eUL)
+#define TSM_PB_INSTMEM_MEM_DATA (0x65aefa38UL)
+#define TSM_PI_CTRL_I (0x8d71a4e2UL)
+#define TSM_PI_CTRL_I_VAL (0x98baedc9UL)
+#define TSM_PI_CTRL_KI (0xa1bd86cbUL)
+#define TSM_PI_CTRL_KI_GAIN (0x53faa916UL)
+#define TSM_PI_CTRL_KP (0xc5d62e0bUL)
+#define TSM_PI_CTRL_KP_GAIN (0x7723fa45UL)
+#define TSM_PI_CTRL_SHL (0xaa518701UL)
+#define TSM_PI_CTRL_SHL_VAL (0x56f56a6fUL)
+#define TSM_STAT (0xa55bf677UL)
+#define TSM_STAT_HARD_SYNC (0x7fff20fdUL)
+#define TSM_STAT_LINK_CON0 (0x216086f0UL)
+#define TSM_STAT_LINK_CON1 (0x5667b666UL)
+#define TSM_STAT_LINK_CON2 (0xcf6ee7dcUL)
+#define TSM_STAT_LINK_CON3 (0xb869d74aUL)
+#define TSM_STAT_LINK_CON4 (0x260d42e9UL)
+#define TSM_STAT_LINK_CON5 (0x510a727fUL)
+#define TSM_STAT_NTTS_INSYNC (0xb593a245UL)
+#define TSM_STAT_PTP_MI_PRESENT (0x43131eb0UL)
#define TSM_TIMER_CTRL (0x648da051UL)
#define TSM_TIMER_CTRL_TIMER_EN_T0 (0x17cee154UL)
#define TSM_TIMER_CTRL_TIMER_EN_T1 (0x60c9d1c2UL)
@@ -16,13 +166,40 @@
#define TSM_TIMER_T0_MAX_COUNT (0xaa601706UL)
#define TSM_TIMER_T1 (0x36752733UL)
#define TSM_TIMER_T1_MAX_COUNT (0x6beec8c6UL)
+#define TSM_TIME_HARDSET_HI (0xf28bdb46UL)
+#define TSM_TIME_HARDSET_HI_TIME (0x2d9a28baUL)
+#define TSM_TIME_HARDSET_LO (0x7f84bb77UL)
+#define TSM_TIME_HARDSET_LO_TIME (0xf8cefb4UL)
#define TSM_TIME_HI (0x175acea1UL)
#define TSM_TIME_HI_SEC (0xc0e9c9a1UL)
#define TSM_TIME_LO (0x9a55ae90UL)
#define TSM_TIME_LO_NS (0x879c5c4bUL)
+#define TSM_TIME_RATE_ADJ (0xb1cc4bb1UL)
+#define TSM_TIME_RATE_ADJ_FRACTION (0xb7ab96UL)
#define TSM_TS_HI (0xccfe9e5eUL)
#define TSM_TS_HI_TIME (0xc23fed30UL)
#define TSM_TS_LO (0x41f1fe6fUL)
#define TSM_TS_LO_TIME (0xe0292a3eUL)
+#define TSM_TS_OFFSET (0x4b2e6e13UL)
+#define TSM_TS_OFFSET_NS (0x68c286b9UL)
+#define TSM_TS_STAT (0x64d41b8cUL)
+#define TSM_TS_STAT_OVERRUN (0xad9db92aUL)
+#define TSM_TS_STAT_SAMPLES (0xb6350e0bUL)
+#define TSM_TS_STAT_HI_OFFSET (0x1aa2ddf2UL)
+#define TSM_TS_STAT_HI_OFFSET_NS (0xeb040e0fUL)
+#define TSM_TS_STAT_LO_OFFSET (0x81218579UL)
+#define TSM_TS_STAT_LO_OFFSET_NS (0xb7ff33UL)
+#define TSM_TS_STAT_TAR_HI (0x65af24b6UL)
+#define TSM_TS_STAT_TAR_HI_SEC (0x7e92f619UL)
+#define TSM_TS_STAT_TAR_LO (0xe8a04487UL)
+#define TSM_TS_STAT_TAR_LO_NS (0xf7b3f439UL)
+#define TSM_TS_STAT_X (0x419f0ddUL)
+#define TSM_TS_STAT_X_NS (0xa48c3f27UL)
+#define TSM_TS_STAT_X2_HI (0xd6b1c517UL)
+#define TSM_TS_STAT_X2_HI_NS (0x4288c50fUL)
+#define TSM_TS_STAT_X2_LO (0x5bbea526UL)
+#define TSM_TS_STAT_X2_LO_NS (0x92633c13UL)
+#define TSM_UTC_OFFSET (0xf622a13aUL)
+#define TSM_UTC_OFFSET_SEC (0xd9c80209UL)
#endif /* _NTHW_FPGA_REG_DEFS_TSM_ */
--
2.45.0
* [PATCH v5 61/80] net/ntnic: add xStats
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (59 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 60/80] net/ntnic: add TSM module Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 62/80] net/ntnic: added flow statistics Serhii Iliushyk
` (18 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the implementation and initialization of extended statistics.
Extend the set of eth dev operations with xstats support:
get, get_names, reset, get_by_id, and get_names_by_id.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 2 +
drivers/net/ntnic/include/ntnic_stat.h | 36 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 112 +++
drivers/net/ntnic/ntnic_mod_reg.c | 15 +
drivers/net/ntnic/ntnic_mod_reg.h | 28 +
drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c | 829 ++++++++++++++++++
8 files changed, 1024 insertions(+)
create mode 100644 drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 64351bcdc7..947c7ba3a1 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -13,6 +13,7 @@ Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
Basic stats = Y
+Extended stats = Y
Linux = Y
x86-64 = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 6e3a290a5c..bae27ce1ce 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -64,6 +64,8 @@ Features
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
- Port and queue statistics.
+- RMON statistics in extended stats.
+- Link state information.
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/include/ntnic_stat.h b/drivers/net/ntnic/include/ntnic_stat.h
index 0735dbc085..4d4affa3cf 100644
--- a/drivers/net/ntnic/include/ntnic_stat.h
+++ b/drivers/net/ntnic/include/ntnic_stat.h
@@ -169,6 +169,39 @@ struct port_counters_v2 {
};
struct flm_counters_v1 {
+ /* FLM 0.17 */
+ uint64_t current;
+ uint64_t learn_done;
+ uint64_t learn_ignore;
+ uint64_t learn_fail;
+ uint64_t unlearn_done;
+ uint64_t unlearn_ignore;
+ uint64_t auto_unlearn_done;
+ uint64_t auto_unlearn_ignore;
+ uint64_t auto_unlearn_fail;
+ uint64_t timeout_unlearn_done;
+ uint64_t rel_done;
+ uint64_t rel_ignore;
+ /* FLM 0.20 */
+ uint64_t prb_done;
+ uint64_t prb_ignore;
+ uint64_t sta_done;
+ uint64_t inf_done;
+ uint64_t inf_skip;
+ uint64_t pck_hit;
+ uint64_t pck_miss;
+ uint64_t pck_unh;
+ uint64_t pck_dis;
+ uint64_t csh_hit;
+ uint64_t csh_miss;
+ uint64_t csh_unh;
+ uint64_t cuc_start;
+ uint64_t cuc_move;
+ /* FLM 0.17 Load */
+ uint64_t load_lps;
+ uint64_t load_aps;
+ uint64_t max_lps;
+ uint64_t max_aps;
};
struct nt4ga_stat_s {
@@ -200,6 +233,9 @@ struct nt4ga_stat_s {
struct host_buffer_counters *mp_stat_structs_hb;
struct port_load_counters *mp_port_load;
+ int flm_stat_ver;
+ struct flm_counters_v1 *mp_stat_structs_flm;
+
/* Rx/Tx totals: */
uint64_t n_totals_reset_timestamp; /* timestamp for last totals reset */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index a6c4fec0be..e59ac5bdb3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -31,6 +31,7 @@ sources = files(
'link_mgmt/nt4ga_link.c',
'nim/i2c_nim.c',
'ntnic_filter/ntnic_filter.c',
+ 'ntnic_xstats/ntnic_xstats.c',
'nthw/dbs/nthw_dbs.c',
'nthw/supported/nthw_fpga_9563_055_049_0000.c',
'nthw/supported/nthw_fpga_instances.c',
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 8a9ca2c03d..5635bd3b42 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1496,6 +1496,113 @@ static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct r
return 0;
}
+static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *stats, unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats = ntnic_xstats_ops->nthw_xstats_get(p_nt4ga_stat, stats, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+ int nb_xstats;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ nb_xstats =
+ ntnic_xstats_ops->nthw_xstats_get_by_id(p_nt4ga_stat, ids, values, n, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return nb_xstats;
+}
+
+static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ int if_index = internals->n_intf_no;
+
+ struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ ntnic_xstats_ops->nthw_xstats_reset(p_nt4ga_stat, if_index);
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ return dpdk_stats_reset(internals, p_nt_drv, if_index);
+}
+
+static int eth_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names(p_nt4ga_stat, xstats_names, size);
+}
+
+static int eth_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
+ const uint64_t *ids,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ const struct ntnic_xstats_ops *ntnic_xstats_ops = get_ntnic_xstats_ops();
+
+ if (ntnic_xstats_ops == NULL) {
+ NT_LOG(INF, NTNIC, "ntnic_xstats module not included");
+ return -1;
+ }
+
+ return ntnic_xstats_ops->nthw_xstats_get_names_by_id(p_nt4ga_stat, xstats_names, ids,
+ size);
+}
+
static int
promiscuous_enable(struct rte_eth_dev __rte_unused(*dev))
{
@@ -1592,6 +1699,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
.flow_ops_get = dev_flow_ops_get,
+ .xstats_get = eth_xstats_get,
+ .xstats_get_names = eth_xstats_get_names,
+ .xstats_reset = eth_xstats_reset,
+ .xstats_get_by_id = eth_xstats_get_by_id,
+ .xstats_get_names_by_id = eth_xstats_get_names_by_id,
.promiscuous_enable = promiscuous_enable,
.rss_hash_update = eth_dev_rss_hash_update,
.rss_hash_conf_get = rss_hash_conf_get,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 355e2032b1..6737d18a6f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -192,3 +192,18 @@ const struct rte_flow_ops *get_dev_flow_ops(void)
return dev_flow_ops;
}
+
+static struct ntnic_xstats_ops *ntnic_xstats_ops;
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops)
+{
+ ntnic_xstats_ops = ops;
+}
+
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void)
+{
+ if (ntnic_xstats_ops == NULL)
+ ntnic_xstats_ops_init();
+
+ return ntnic_xstats_ops;
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 8703d478b6..65e7972c68 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -7,6 +7,10 @@
#define __NTNIC_MOD_REG_H__
#include <stdint.h>
+
+#include "rte_ethdev.h"
+#include "rte_flow_driver.h"
+
#include "flow_api.h"
#include "stream_binary_flow_api.h"
#include "nthw_fpga_model.h"
@@ -354,4 +358,28 @@ void register_flow_filter_ops(const struct flow_filter_ops *ops);
const struct flow_filter_ops *get_flow_filter_ops(void);
void init_flow_filter(void);
+struct ntnic_xstats_ops {
+ int (*nthw_xstats_get_names)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size);
+ int (*nthw_xstats_get)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port);
+ void (*nthw_xstats_reset)(nt4ga_stat_t *p_nt4ga_stat, uint8_t port);
+ int (*nthw_xstats_get_names_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size);
+ int (*nthw_xstats_get_by_id)(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port);
+};
+
+void register_ntnic_xstats_ops(struct ntnic_xstats_ops *ops);
+struct ntnic_xstats_ops *get_ntnic_xstats_ops(void);
+void ntnic_xstats_ops_init(void);
+
#endif /* __NTNIC_MOD_REG_H__ */
diff --git a/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
new file mode 100644
index 0000000000..7604afe6a0
--- /dev/null
+++ b/drivers/net/ntnic/ntnic_xstats/ntnic_xstats.c
@@ -0,0 +1,829 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <rte_ethdev.h>
+
+#include "include/ntdrv_4ga.h"
+#include "ntlog.h"
+#include "nthw_drv.h"
+#include "nthw_fpga.h"
+#include "stream_binary_flow_api.h"
+#include "ntnic_mod_reg.h"
+
+struct rte_nthw_xstats_names_s {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint8_t source;
+ unsigned int offset;
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.17
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v1[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * FLM 0.18
+ */
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v2[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) }
+};
+
+/*
+ * Extended stat for Capture/Inline - implements RMON
+ * STA 0.9
+ */
+
+static struct rte_nthw_xstats_names_s nthw_cap_xstats_names_v3[] = {
+ { "rx_drop_events", 1, offsetof(struct port_counters_v2, drop_events) },
+ { "rx_octets", 1, offsetof(struct port_counters_v2, octets) },
+ { "rx_packets", 1, offsetof(struct port_counters_v2, pkts) },
+ { "rx_broadcast_packets", 1, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "rx_multicast_packets", 1, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "rx_unicast_packets", 1, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "rx_align_errors", 1, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "rx_code_violation_errors", 1, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "rx_crc_errors", 1, offsetof(struct port_counters_v2, pkts_crc) },
+ { "rx_undersize_packets", 1, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "rx_oversize_packets", 1, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "rx_fragments", 1, offsetof(struct port_counters_v2, fragments) },
+ {
+ "rx_jabbers_not_truncated", 1,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "rx_jabbers_truncated", 1, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "rx_size_64_packets", 1, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "rx_size_65_to_127_packets", 1,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "rx_size_128_to_255_packets", 1,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "rx_size_256_to_511_packets", 1,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "rx_size_512_to_1023_packets", 1,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "rx_size_1024_to_1518_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "rx_size_1519_to_2047_packets", 1,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "rx_size_2048_to_4095_packets", 1,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "rx_size_4096_to_8191_packets", 1,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "rx_size_8192_to_max_packets", 1,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+ { "rx_ip_checksum_error", 1, offsetof(struct port_counters_v2, pkts_ip_chksum_error) },
+ { "rx_udp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_udp_chksum_error) },
+ { "rx_tcp_checksum_error", 1, offsetof(struct port_counters_v2, pkts_tcp_chksum_error) },
+
+ { "tx_drop_events", 2, offsetof(struct port_counters_v2, drop_events) },
+ { "tx_octets", 2, offsetof(struct port_counters_v2, octets) },
+ { "tx_packets", 2, offsetof(struct port_counters_v2, pkts) },
+ { "tx_broadcast_packets", 2, offsetof(struct port_counters_v2, broadcast_pkts) },
+ { "tx_multicast_packets", 2, offsetof(struct port_counters_v2, multicast_pkts) },
+ { "tx_unicast_packets", 2, offsetof(struct port_counters_v2, unicast_pkts) },
+ { "tx_align_errors", 2, offsetof(struct port_counters_v2, pkts_alignment) },
+ { "tx_code_violation_errors", 2, offsetof(struct port_counters_v2, pkts_code_violation) },
+ { "tx_crc_errors", 2, offsetof(struct port_counters_v2, pkts_crc) },
+ { "tx_undersize_packets", 2, offsetof(struct port_counters_v2, undersize_pkts) },
+ { "tx_oversize_packets", 2, offsetof(struct port_counters_v2, oversize_pkts) },
+ { "tx_fragments", 2, offsetof(struct port_counters_v2, fragments) },
+ {
+ "tx_jabbers_not_truncated", 2,
+ offsetof(struct port_counters_v2, jabbers_not_truncated)
+ },
+ { "tx_jabbers_truncated", 2, offsetof(struct port_counters_v2, jabbers_truncated) },
+ { "tx_size_64_packets", 2, offsetof(struct port_counters_v2, pkts_64_octets) },
+ {
+ "tx_size_65_to_127_packets", 2,
+ offsetof(struct port_counters_v2, pkts_65_to_127_octets)
+ },
+ {
+ "tx_size_128_to_255_packets", 2,
+ offsetof(struct port_counters_v2, pkts_128_to_255_octets)
+ },
+ {
+ "tx_size_256_to_511_packets", 2,
+ offsetof(struct port_counters_v2, pkts_256_to_511_octets)
+ },
+ {
+ "tx_size_512_to_1023_packets", 2,
+ offsetof(struct port_counters_v2, pkts_512_to_1023_octets)
+ },
+ {
+ "tx_size_1024_to_1518_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1024_to_1518_octets)
+ },
+ {
+ "tx_size_1519_to_2047_packets", 2,
+ offsetof(struct port_counters_v2, pkts_1519_to_2047_octets)
+ },
+ {
+ "tx_size_2048_to_4095_packets", 2,
+ offsetof(struct port_counters_v2, pkts_2048_to_4095_octets)
+ },
+ {
+ "tx_size_4096_to_8191_packets", 2,
+ offsetof(struct port_counters_v2, pkts_4096_to_8191_octets)
+ },
+ {
+ "tx_size_8192_to_max_packets", 2,
+ offsetof(struct port_counters_v2, pkts_8192_to_max_octets)
+ },
+
+ /* FLM 0.17 */
+ { "flm_count_current", 3, offsetof(struct flm_counters_v1, current) },
+ { "flm_count_learn_done", 3, offsetof(struct flm_counters_v1, learn_done) },
+ { "flm_count_learn_ignore", 3, offsetof(struct flm_counters_v1, learn_ignore) },
+ { "flm_count_learn_fail", 3, offsetof(struct flm_counters_v1, learn_fail) },
+ { "flm_count_unlearn_done", 3, offsetof(struct flm_counters_v1, unlearn_done) },
+ { "flm_count_unlearn_ignore", 3, offsetof(struct flm_counters_v1, unlearn_ignore) },
+ { "flm_count_auto_unlearn_done", 3, offsetof(struct flm_counters_v1, auto_unlearn_done) },
+ {
+ "flm_count_auto_unlearn_ignore", 3,
+ offsetof(struct flm_counters_v1, auto_unlearn_ignore)
+ },
+ { "flm_count_auto_unlearn_fail", 3, offsetof(struct flm_counters_v1, auto_unlearn_fail) },
+ {
+ "flm_count_timeout_unlearn_done", 3,
+ offsetof(struct flm_counters_v1, timeout_unlearn_done)
+ },
+ { "flm_count_rel_done", 3, offsetof(struct flm_counters_v1, rel_done) },
+ { "flm_count_rel_ignore", 3, offsetof(struct flm_counters_v1, rel_ignore) },
+ { "flm_count_prb_done", 3, offsetof(struct flm_counters_v1, prb_done) },
+ { "flm_count_prb_ignore", 3, offsetof(struct flm_counters_v1, prb_ignore) },
+
+ /* FLM 0.20 */
+ { "flm_count_sta_done", 3, offsetof(struct flm_counters_v1, sta_done) },
+ { "flm_count_inf_done", 3, offsetof(struct flm_counters_v1, inf_done) },
+ { "flm_count_inf_skip", 3, offsetof(struct flm_counters_v1, inf_skip) },
+ { "flm_count_pck_hit", 3, offsetof(struct flm_counters_v1, pck_hit) },
+ { "flm_count_pck_miss", 3, offsetof(struct flm_counters_v1, pck_miss) },
+ { "flm_count_pck_unh", 3, offsetof(struct flm_counters_v1, pck_unh) },
+ { "flm_count_pck_dis", 3, offsetof(struct flm_counters_v1, pck_dis) },
+ { "flm_count_csh_hit", 3, offsetof(struct flm_counters_v1, csh_hit) },
+ { "flm_count_csh_miss", 3, offsetof(struct flm_counters_v1, csh_miss) },
+ { "flm_count_csh_unh", 3, offsetof(struct flm_counters_v1, csh_unh) },
+ { "flm_count_cuc_start", 3, offsetof(struct flm_counters_v1, cuc_start) },
+ { "flm_count_cuc_move", 3, offsetof(struct flm_counters_v1, cuc_move) },
+
+ /* FLM 0.17 */
+ { "flm_count_load_lps", 3, offsetof(struct flm_counters_v1, load_lps) },
+ { "flm_count_load_aps", 3, offsetof(struct flm_counters_v1, load_aps) },
+ { "flm_count_max_lps", 3, offsetof(struct flm_counters_v1, max_lps) },
+ { "flm_count_max_aps", 3, offsetof(struct flm_counters_v1, max_aps) },
+
+ { "rx_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps) },
+ { "rx_max_packet_per_second", 4, offsetof(struct port_load_counters, rx_pps_max) },
+ { "rx_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps) },
+ { "rx_max_bits_per_second", 4, offsetof(struct port_load_counters, rx_bps_max) },
+ { "tx_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps) },
+ { "tx_max_packet_per_second", 4, offsetof(struct port_load_counters, tx_pps_max) },
+ { "tx_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps) },
+ { "tx_max_bits_per_second", 4, offsetof(struct port_load_counters, tx_bps_max) }
+};
+
+#define NTHW_CAP_XSTATS_NAMES_V1 RTE_DIM(nthw_cap_xstats_names_v1)
+#define NTHW_CAP_XSTATS_NAMES_V2 RTE_DIM(nthw_cap_xstats_names_v2)
+#define NTHW_CAP_XSTATS_NAMES_V3 RTE_DIM(nthw_cap_xstats_names_v3)
+
+/*
+ * Container for the reset values
+ */
+#define NTHW_XSTATS_SIZE NTHW_CAP_XSTATS_NAMES_V3
+
+static uint64_t nthw_xstats_reset_val[NUM_ADAPTER_PORTS_MAX][NTHW_XSTATS_SIZE] = { 0 };
+
+/*
+ * These functions must only be called with the stat mutex locked
+ */
+static int nthw_xstats_get(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat *stats,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n && i < nb_names; i++) {
+ stats[i].id = i;
+
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ stats[i].value = *((uint64_t *)&rx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 2:
+ /* TX stat */
+ stats[i].value = *((uint64_t *)&tx_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[port][i];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ stats[i].value = *((uint64_t *)&flm_ptr[names[i].offset]) -
+ nthw_xstats_reset_val[0][i];
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ stats[i].value = *((uint64_t *)&pld_ptr[names[i].offset]);
+
+ } else {
+ stats[i].value = 0;
+ }
+
+ break;
+
+ default:
+ stats[i].value = 0;
+ break;
+ }
+ }
+
+ return i;
+}
+
+static int nthw_xstats_get_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n,
+ uint8_t port)
+{
+ unsigned int i;
+ uint8_t *pld_ptr;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+ int count = 0;
+
+ pld_ptr = (uint8_t *)&p_nt4ga_stat->mp_port_load[port];
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] < nb_names) {
+ switch (names[ids[i]].source) {
+ case 1:
+ /* RX stat */
+ values[i] = *((uint64_t *)&rx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 2:
+ /* TX stat */
+ values[i] = *((uint64_t *)&tx_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[port][ids[i]];
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ if (flm_ptr) {
+ values[i] = *((uint64_t *)&flm_ptr[names[ids[i]].offset]) -
+ nthw_xstats_reset_val[0][ids[i]];
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ case 4:
+
+ /* Port Load stat */
+ if (pld_ptr) {
+ /* No reset */
+ values[i] = *((uint64_t *)&pld_ptr[names[ids[i]].offset]);
+
+ } else {
+ values[i] = 0;
+ }
+
+ break;
+
+ default:
+ values[i] = 0;
+ break;
+ }
+
+ count++;
+ }
+ }
+
+ return count;
+}
+
+static void nthw_xstats_reset(nt4ga_stat_t *p_nt4ga_stat, uint8_t port)
+{
+ unsigned int i;
+ uint8_t *flm_ptr;
+ uint8_t *rx_ptr;
+ uint8_t *tx_ptr;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ flm_ptr = (uint8_t *)p_nt4ga_stat->mp_stat_structs_flm;
+ rx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_rx[port];
+ tx_ptr = (uint8_t *)&p_nt4ga_stat->cap.mp_stat_structs_port_tx[port];
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ for (i = 0; i < nb_names; i++) {
+ switch (names[i].source) {
+ case 1:
+ /* RX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&rx_ptr[names[i].offset]);
+ break;
+
+ case 2:
+ /* TX stat */
+ nthw_xstats_reset_val[port][i] = *((uint64_t *)&tx_ptr[names[i].offset]);
+ break;
+
+ case 3:
+
+ /* FLM stat */
+ /* Reset makes no sense for flm_count_current */
+ /* Reset can't be used for load_lps, load_aps, max_lps and max_aps */
+ if (flm_ptr &&
+ (strcmp(names[i].name, "flm_count_current") != 0 &&
+ strcmp(names[i].name, "flm_count_load_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_load_aps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_lps") != 0 &&
+ strcmp(names[i].name, "flm_count_max_aps") != 0)) {
+ nthw_xstats_reset_val[0][i] =
+ *((uint64_t *)&flm_ptr[names[i].offset]);
+ }
+
+ break;
+
+ case 4:
+ /* Port load stat */
+ /* No reset */
+ break;
+
+ default:
+ break;
+ }
+ }
+}
+
+/*
+ * These functions do not require the stat mutex to be locked
+ */
+static int nthw_xstats_get_names(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size && i < nb_names; i++) {
+ strlcpy(xstats_names[i].name, names[i].name, sizeof(xstats_names[i].name));
+ count++;
+ }
+
+ return count;
+}
+
+static int nthw_xstats_get_names_by_id(nt4ga_stat_t *p_nt4ga_stat,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int size)
+{
+ int count = 0;
+ unsigned int i;
+
+ uint32_t nb_names;
+ struct rte_nthw_xstats_names_s *names;
+
+ if (p_nt4ga_stat->flm_stat_ver < 18) {
+ names = nthw_cap_xstats_names_v1;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V1;
+
+ } else if (p_nt4ga_stat->mp_nthw_stat->mn_stat_layout_version < 7 ||
+ p_nt4ga_stat->flm_stat_ver < 23) {
+ names = nthw_cap_xstats_names_v2;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V2;
+
+ } else {
+ names = nthw_cap_xstats_names_v3;
+ nb_names = NTHW_CAP_XSTATS_NAMES_V3;
+ }
+
+ if (!xstats_names)
+ return nb_names;
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] < nb_names) {
+ strlcpy(xstats_names[i].name,
+ names[ids[i]].name,
+ RTE_ETH_XSTATS_NAME_SIZE);
+ }
+
+ count++;
+ }
+
+ return count;
+}
+
+static struct ntnic_xstats_ops ops = {
+ .nthw_xstats_get_names = nthw_xstats_get_names,
+ .nthw_xstats_get = nthw_xstats_get,
+ .nthw_xstats_reset = nthw_xstats_reset,
+ .nthw_xstats_get_names_by_id = nthw_xstats_get_names_by_id,
+ .nthw_xstats_get_by_id = nthw_xstats_get_by_id
+};
+
+void ntnic_xstats_ops_init(void)
+{
+ NT_LOG_DBGX(DBG, NTNIC, "xstats module was initialized");
+ register_ntnic_xstats_ops(&ops);
+}
--
2.45.0
* [PATCH v5 62/80] net/ntnic: added flow statistics
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (60 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 61/80] net/ntnic: add xStats Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 63/80] net/ntnic: add scrub registers Serhii Iliushyk
` (17 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
xstats were extended with flow statistics support.
Additional counters show learn, unlearn, LPS, APS,
and other FLM statistics.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 1 +
.../net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c | 40 ++++
drivers/net/ntnic/include/hw_mod_backend.h | 3 +
drivers/net/ntnic/include/ntdrv_4ga.h | 1 +
drivers/net/ntnic/meson.build | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 11 +-
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 142 ++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.c | 176 ++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 52 ++++++
.../profile_inline/flow_api_profile_inline.c | 46 +++++
.../profile_inline/flow_api_profile_inline.h | 6 +
drivers/net/ntnic/nthw/rte_pmd_ntnic.h | 43 +++++
drivers/net/ntnic/ntnic_ethdev.c | 132 +++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 7 +
14 files changed, 657 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
create mode 100644 drivers/net/ntnic/nthw/rte_pmd_ntnic.h
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index bae27ce1ce..47960ca3f1 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -66,6 +66,7 @@ Features
- Port and queue statistics.
- RMON statistics in extended stats.
- Link state information.
+- Flow statistics.
Limitations
~~~~~~~~~~~
diff --git a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
index 3afc5b7853..8fedfdcd04 100644
--- a/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
+++ b/drivers/net/ntnic/adapter/nt4ga_stat/nt4ga_stat.c
@@ -189,6 +189,24 @@ static int nt4ga_stat_setup(struct adapter_info_s *p_adapter_info)
return -1;
}
+ if (get_flow_filter_ops() != NULL) {
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+ p_nt4ga_stat->flm_stat_ver = ndev->be.flm.ver;
+ p_nt4ga_stat->mp_stat_structs_flm = calloc(1, sizeof(struct flm_counters_v1));
+
+ if (!p_nt4ga_stat->mp_stat_structs_flm) {
+ NT_LOG_DBGX(ERR, GENERAL, "Cannot allocate mem.");
+ return -1;
+ }
+
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_APS_MAX, 0);
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps =
+ nthw_fpga_get_product_param(p_adapter_info->fpga_info.mp_fpga,
+ NT_FLM_LOAD_LPS_MAX, 0);
+ }
+
p_nt4ga_stat->mp_port_load =
calloc(NUM_ADAPTER_PORTS_MAX, sizeof(struct port_load_counters));
@@ -236,6 +254,7 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
return -1;
nthw_stat_t *p_nthw_stat = p_nt4ga_stat->mp_nthw_stat;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
const int n_rx_ports = p_nt4ga_stat->mn_rx_ports;
const int n_tx_ports = p_nt4ga_stat->mn_tx_ports;
@@ -542,6 +561,27 @@ static int nt4ga_stat_collect_cap_v1_stats(struct adapter_info_s *p_adapter_info
(uint64_t)(((__uint128_t)val * 32ULL) / PORT_LOAD_WINDOWS_SIZE);
}
+ /* Update and get FLM stats */
+ flow_filter_ops->flow_get_flm_stats(ndev, (uint64_t *)p_nt4ga_stat->mp_stat_structs_flm,
+ sizeof(struct flm_counters_v1) / sizeof(uint64_t));
+
+ /*
+ * Calculate correct load values:
+ * rpp = nthw_fpga_get_product_param(p_fpga, NT_RPP_PER_PS, 0);
+ * bin = (uint32_t)(((FLM_LOAD_WINDOWS_SIZE * 1000000000000ULL) / (32ULL * rpp)) - 1ULL);
+ * load_aps = ((uint64_t)load_aps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ * load_lps = ((uint64_t)load_lps * 1000000000000ULL) / (uint64_t)((bin+1) * rpp);
+ *
+ * Simplified it gives:
+ *
+ * load_lps = (load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ * load_aps = (load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE
+ */
+
+ p_nt4ga_stat->mp_stat_structs_flm->load_aps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_aps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
+ p_nt4ga_stat->mp_stat_structs_flm->load_lps =
+ (p_nt4ga_stat->mp_stat_structs_flm->load_lps * 32ULL) / FLM_LOAD_WINDOWS_SIZE;
return 0;
}
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 17d5755634..9cd9d92823 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 38e4d0ca35..677aa7b6c8 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -17,6 +17,7 @@ typedef struct ntdrv_4ga_s {
rte_thread_t flm_thread;
pthread_mutex_t stat_lck;
rte_thread_t stat_thread;
+ rte_thread_t port_event_thread;
} ntdrv_4ga_t;
#endif /* __NTDRV_4GA_H__ */
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index e59ac5bdb3..c0b7729929 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -59,6 +59,7 @@ sources = files(
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
+ 'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
'nthw/flow_api/profile_inline/flow_api_hw_db_inline.c',
'nthw/flow_api/flow_backend/flow_backend.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index d5a4b0b10c..0e9fc33dec 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1016,11 +1016,14 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
- (void)ndev;
- (void)data;
- (void)size;
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL)
+ return -1;
+
+ if (ndev->flow_profile == FLOW_ETH_DEV_PROFILE_INLINE)
+ return profile_inline_ops->flow_get_flm_stats_profile_inline(ndev, data, size);
- NT_LOG_DBGX(DBG, FILTER, "Not implemented yet");
return -1;
}
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index f4c29b8bde..1845f74166 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,148 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_stat_update(be->be_dev, &be->flm);
+}
+
+int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_STAT_LRN_DONE:
+ *value = be->flm.v25.lrn_done->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_IGNORE:
+ *value = be->flm.v25.lrn_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_LRN_FAIL:
+ *value = be->flm.v25.lrn_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_DONE:
+ *value = be->flm.v25.unl_done->cnt;
+ break;
+
+ case HW_FLM_STAT_UNL_IGNORE:
+ *value = be->flm.v25.unl_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_DONE:
+ *value = be->flm.v25.rel_done->cnt;
+ break;
+
+ case HW_FLM_STAT_REL_IGNORE:
+ *value = be->flm.v25.rel_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_DONE:
+ *value = be->flm.v25.prb_done->cnt;
+ break;
+
+ case HW_FLM_STAT_PRB_IGNORE:
+ *value = be->flm.v25.prb_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_DONE:
+ *value = be->flm.v25.aul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_IGNORE:
+ *value = be->flm.v25.aul_ignore->cnt;
+ break;
+
+ case HW_FLM_STAT_AUL_FAIL:
+ *value = be->flm.v25.aul_fail->cnt;
+ break;
+
+ case HW_FLM_STAT_TUL_DONE:
+ *value = be->flm.v25.tul_done->cnt;
+ break;
+
+ case HW_FLM_STAT_FLOWS:
+ *value = be->flm.v25.flows->cnt;
+ break;
+
+ case HW_FLM_LOAD_LPS:
+ *value = be->flm.v25.load_lps->lps;
+ break;
+
+ case HW_FLM_LOAD_APS:
+ *value = be->flm.v25.load_aps->aps;
+ break;
+
+ default: {
+ if (_VER_ < 18)
+ return UNSUP_FIELD;
+
+ switch (field) {
+ case HW_FLM_STAT_STA_DONE:
+ *value = be->flm.v25.sta_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_DONE:
+ *value = be->flm.v25.inf_done->cnt;
+ break;
+
+ case HW_FLM_STAT_INF_SKIP:
+ *value = be->flm.v25.inf_skip->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_HIT:
+ *value = be->flm.v25.pck_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_MISS:
+ *value = be->flm.v25.pck_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_UNH:
+ *value = be->flm.v25.pck_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_PCK_DIS:
+ *value = be->flm.v25.pck_dis->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_HIT:
+ *value = be->flm.v25.csh_hit->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_MISS:
+ *value = be->flm.v25.csh_miss->cnt;
+ break;
+
+ case HW_FLM_STAT_CSH_UNH:
+ *value = be->flm.v25.csh_unh->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_START:
+ *value = be->flm.v25.cuc_start->cnt;
+ break;
+
+ case HW_FLM_STAT_CUC_MOVE:
+ *value = be->flm.v25.cuc_move->cnt;
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+ }
+ break;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e field,
const uint32_t *value, uint32_t records,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
new file mode 100644
index 0000000000..98b0e8347a
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -0,0 +1,176 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ring.h>
+#include <rte_errno.h>
+
+#include "ntlog.h"
+#include "flm_evt_queue.h"
+
+/* Local queues for flm statistic events */
+static struct rte_ring *info_q_local[MAX_INFO_LCL_QUEUES];
+
+/* Remote queues for flm statistic events */
+static struct rte_ring *info_q_remote[MAX_INFO_RMT_QUEUES];
+
+/* Local queues for flm status records */
+static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
+
+/* Remote queues for flm status records */
+static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+
+
+static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
+{
+ static_assert((FLM_EVT_ELEM_SIZE & ~(size_t)3) == FLM_EVT_ELEM_SIZE,
+ "FLM EVENT struct size");
+ static_assert((FLM_STAT_ELEM_SIZE & ~(size_t)3) == FLM_STAT_ELEM_SIZE,
+ "FLM STAT struct size");
+ char name[20] = "NONE";
+ struct rte_ring *q;
+ uint32_t elem_size = 0;
+ uint32_t queue_size = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port >= MAX_INFO_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_INFO_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port >= MAX_INFO_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM statistic event queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_INFO_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_INFO%u", port);
+ elem_size = FLM_EVT_ELEM_SIZE;
+ queue_size = FLM_EVT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port >= MAX_STAT_LCL_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_STAT_LCL_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "LOCAL_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port >= MAX_STAT_RMT_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM status queue cannot be created for vport %u. Max supported vport is %u",
+ port,
+ MAX_STAT_RMT_QUEUES - 1);
+ return NULL;
+ }
+
+ snprintf(name, 20, "REMOTE_STAT%u", port);
+ elem_size = FLM_STAT_ELEM_SIZE;
+ queue_size = FLM_STAT_QUEUE_SIZE;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue create illegal caller: %u", caller);
+ return NULL;
+ }
+
+ q = rte_ring_create_elem(name,
+ elem_size,
+ queue_size,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN, FILTER, "FLM queues cannot be created due to error %02X", rte_errno);
+ return NULL;
+ }
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ info_q_local[port] = q;
+ break;
+
+ case FLM_INFO_REMOTE:
+ info_q_remote[port] = q;
+ break;
+
+ case FLM_STAT_LOCAL:
+ stat_q_local[port] = q;
+ break;
+
+ case FLM_STAT_REMOTE:
+ stat_q_remote[port] = q;
+ break;
+
+ default:
+ break;
+ }
+
+ return q;
+}
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+ /* Create the queue on first access; if it cannot be created, return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES) {
+ if (info_q_local[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_local[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_LOCAL) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES) {
+ if (info_q_remote[port] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(info_q_remote[port],
+ obj,
+ FLM_EVT_ELEM_SIZE);
+ return ret;
+ }
+
+ if (flm_evt_queue_create(port, FLM_INFO_REMOTE) != NULL) {
+ /* Recursive call to get data */
+ return flm_inf_queue_get(port, remote, obj);
+ }
+ }
+
+ return -ENOENT;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
new file mode 100644
index 0000000000..238be7a3b2
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -0,0 +1,52 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_EVT_QUEUE_H_
+#define _FLM_EVT_QUEUE_H_
+
+#include <stdint.h>
+#include <stdbool.h>
+
+struct flm_status_event_s {
+ void *flow;
+ uint32_t learn_ignore : 1;
+ uint32_t learn_failed : 1;
+ uint32_t learn_done : 1;
+};
+
+struct flm_info_event_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum {
+ FLM_INFO_LOCAL,
+ FLM_INFO_REMOTE,
+ FLM_STAT_LOCAL,
+ FLM_STAT_REMOTE,
+};
+
+/* Max number of local queues */
+#define MAX_INFO_LCL_QUEUES 8
+#define MAX_STAT_LCL_QUEUES 8
+
+/* Max number of remote queues */
+#define MAX_INFO_RMT_QUEUES 128
+#define MAX_STAT_RMT_QUEUES 128
+
+/* queue size */
+#define FLM_EVT_QUEUE_SIZE 8192
+#define FLM_STAT_QUEUE_SIZE 8192
+
+/* Event element size */
+#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
+#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+
+int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+
+#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 21d0df56e5..c676e20601 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -4462,6 +4462,48 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
return 0;
}
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
+{
+ const enum hw_flm_e fields[] = {
+ HW_FLM_STAT_FLOWS, HW_FLM_STAT_LRN_DONE, HW_FLM_STAT_LRN_IGNORE,
+ HW_FLM_STAT_LRN_FAIL, HW_FLM_STAT_UNL_DONE, HW_FLM_STAT_UNL_IGNORE,
+ HW_FLM_STAT_AUL_DONE, HW_FLM_STAT_AUL_IGNORE, HW_FLM_STAT_AUL_FAIL,
+ HW_FLM_STAT_TUL_DONE, HW_FLM_STAT_REL_DONE, HW_FLM_STAT_REL_IGNORE,
+ HW_FLM_STAT_PRB_DONE, HW_FLM_STAT_PRB_IGNORE,
+
+ HW_FLM_STAT_STA_DONE, HW_FLM_STAT_INF_DONE, HW_FLM_STAT_INF_SKIP,
+ HW_FLM_STAT_PCK_HIT, HW_FLM_STAT_PCK_MISS, HW_FLM_STAT_PCK_UNH,
+ HW_FLM_STAT_PCK_DIS, HW_FLM_STAT_CSH_HIT, HW_FLM_STAT_CSH_MISS,
+ HW_FLM_STAT_CSH_UNH, HW_FLM_STAT_CUC_START, HW_FLM_STAT_CUC_MOVE,
+
+ HW_FLM_LOAD_LPS, HW_FLM_LOAD_APS,
+ };
+
+ const uint64_t fields_cnt = sizeof(fields) / sizeof(enum hw_flm_e);
+
+ if (!ndev->flow_mgnt_prepared)
+ return 0;
+
+ if (size < fields_cnt)
+ return -1;
+
+ hw_mod_flm_stat_update(&ndev->be);
+
+ for (uint64_t i = 0; i < fields_cnt; ++i) {
+ uint32_t value = 0;
+ hw_mod_flm_stat_get(&ndev->be, fields[i], &value);
+ data[i] = (fields[i] == HW_FLM_STAT_FLOWS || fields[i] == HW_FLM_LOAD_LPS ||
+ fields[i] == HW_FLM_LOAD_APS)
+ ? value
+ : data[i] + value;
+
+ if (ndev->be.flm.ver < 18 && fields[i] == HW_FLM_STAT_PRB_IGNORE)
+ break;
+ }
+
+ return 0;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4478,6 +4520,10 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ /*
+ * Stats
+ */
+ .flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index c695842077..b44d3a7291 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -52,4 +52,10 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+/*
+ * Stats
+ */
+
+int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/rte_pmd_ntnic.h b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
new file mode 100644
index 0000000000..4a1ba18a5e
--- /dev/null
+++ b/drivers/net/ntnic/nthw/rte_pmd_ntnic.h
@@ -0,0 +1,43 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#ifndef NTNIC_EVENT_H_
+#define NTNIC_EVENT_H_
+
+#include <rte_ethdev.h>
+
+typedef struct ntnic_flm_load_s {
+ uint64_t lookup;
+ uint64_t lookup_maximum;
+ uint64_t access;
+ uint64_t access_maximum;
+} ntnic_flm_load_t;
+
+typedef struct ntnic_port_load_s {
+ uint64_t rx_pps;
+ uint64_t rx_pps_maximum;
+ uint64_t tx_pps;
+ uint64_t tx_pps_maximum;
+ uint64_t rx_bps;
+ uint64_t rx_bps_maximum;
+ uint64_t tx_bps;
+ uint64_t tx_bps_maximum;
+} ntnic_port_load_t;
+
+struct ntnic_flm_statistic_s {
+ uint64_t bytes;
+ uint64_t packets;
+ uint64_t timestamp;
+ uint64_t id;
+ uint8_t cause;
+};
+
+enum rte_ntnic_event_type {
+ RTE_NTNIC_FLM_LOAD_EVENT = RTE_ETH_EVENT_MAX,
+ RTE_NTNIC_PORT_LOAD_EVENT,
+ RTE_NTNIC_FLM_STATS_EVENT,
+};
+
+#endif /* NTNIC_EVENT_H_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 5635bd3b42..4a0dafeff0 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,8 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_evt_queue.h"
+#include "rte_pmd_ntnic.h"
const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL };
#define THREAD_CREATE(a, b, c) rte_thread_create(a, &thread_attr, b, c)
@@ -1419,6 +1421,7 @@ drv_deinit(struct drv_s *p_drv)
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
+ THREAD_JOIN(p_nt_drv->port_event_thread);
}
/* stop adapter */
@@ -1709,6 +1712,123 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.rss_hash_conf_get = rss_hash_conf_get,
};
+/*
+ * Port event thread
+ */
+THREAD_FUNC port_event_thread_fn(void *context)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)context;
+ struct drv_s *p_drv = internals->p_drv;
+ ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
+ struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
+ struct flow_nic_dev *ndev = p_adapter_info->nt4ga_filter.mp_flow_device;
+
+ nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[internals->port_id];
+ uint8_t port_no = internals->port;
+
+ ntnic_flm_load_t flmdata;
+ ntnic_port_load_t portdata;
+
+ memset(&flmdata, 0, sizeof(flmdata));
+ memset(&portdata, 0, sizeof(portdata));
+
+ while (ndev != NULL && ndev->eth_base == NULL)
+ nt_os_wait_usec(1 * 1000 * 1000);
+
+ while (!p_drv->ntdrv.b_shutdown) {
+ /*
+ * FLM load measurement
+ * Only send an event if there has been a change
+ */
+ if (p_nt4ga_stat->flm_stat_ver > 22 && p_nt4ga_stat->mp_stat_structs_flm) {
+ if (flmdata.lookup != p_nt4ga_stat->mp_stat_structs_flm->load_lps ||
+ flmdata.access != p_nt4ga_stat->mp_stat_structs_flm->load_aps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ flmdata.lookup = p_nt4ga_stat->mp_stat_structs_flm->load_lps;
+ flmdata.access = p_nt4ga_stat->mp_stat_structs_flm->load_aps;
+ flmdata.lookup_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_lps;
+ flmdata.access_maximum =
+ p_nt4ga_stat->mp_stat_structs_flm->max_aps;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_FLM_LOAD_EVENT,
+ &flmdata);
+ }
+ }
+ }
+
+ /*
+ * Port load measurement
+ * Only send an event if there has been a change.
+ */
+ if (p_nt4ga_stat->mp_port_load) {
+ if (portdata.rx_bps != p_nt4ga_stat->mp_port_load[port_no].rx_bps ||
+ portdata.tx_bps != p_nt4ga_stat->mp_port_load[port_no].tx_bps) {
+ pthread_mutex_lock(&p_nt_drv->stat_lck);
+ portdata.rx_bps = p_nt4ga_stat->mp_port_load[port_no].rx_bps;
+ portdata.tx_bps = p_nt4ga_stat->mp_port_load[port_no].tx_bps;
+ portdata.rx_pps = p_nt4ga_stat->mp_port_load[port_no].rx_pps;
+ portdata.tx_pps = p_nt4ga_stat->mp_port_load[port_no].tx_pps;
+ portdata.rx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_pps_max;
+ portdata.tx_pps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_pps_max;
+ portdata.rx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].rx_bps_max;
+ portdata.tx_bps_maximum =
+ p_nt4ga_stat->mp_port_load[port_no].tx_bps_max;
+ pthread_mutex_unlock(&p_nt_drv->stat_lck);
+
+ if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)RTE_NTNIC_PORT_LOAD_EVENT,
+ &portdata);
+ }
+ }
+ }
+
+ /* Process events */
+ {
+ int count = 0;
+ bool do_wait = true;
+
+ while (count < 5000) {
+ /* Local FLM statistic events */
+ struct flm_info_event_s data;
+
+ if (flm_inf_queue_get(port_no, FLM_INFO_LOCAL, &data) == 0) {
+ if (eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ struct ntnic_flm_statistic_s event_data;
+ event_data.bytes = data.bytes;
+ event_data.packets = data.packets;
+ event_data.cause = data.cause;
+ event_data.id = data.id;
+ event_data.timestamp = data.timestamp;
+ rte_eth_dev_callback_process(eth_dev,
+ (enum rte_eth_event_type)
+ RTE_NTNIC_FLM_STATS_EVENT,
+ &event_data);
+ do_wait = false;
+ }
+ }
+
+ if (do_wait)
+ nt_os_wait_usec(10);
+
+ count++;
+ do_wait = true;
+ }
+ }
+ }
+
+ return THREAD_RETURN;
+}
+
/*
* Adapter flm stat thread
*/
@@ -2235,6 +2355,18 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* increase initialized ethernet devices - PF */
p_drv->n_eth_dev_init_count++;
+
+ /* Port event thread */
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ res = THREAD_CTRL_CREATE(&p_nt_drv->port_event_thread, "nt_port_event_thr",
+ port_event_thread_fn, (void *)internals);
+
+ if (res) {
+ NT_LOG(ERR, NTNIC, "%s: error=%d",
+ (pci_dev->name[0] ? pci_dev->name : "NA"), res);
+ return -1;
+ }
+ }
}
return 0;
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 65e7972c68..7325bd1ea8 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -290,6 +290,13 @@ struct profile_inline_ops {
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
+ /*
+ * Stats
+ */
+ int (*flow_get_flm_stats_profile_inline)(struct flow_nic_dev *ndev,
+ uint64_t *data,
+ uint64_t size);
+
/*
* NT Flow FLM queue API
*/
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 63/80] net/ntnic: add scrub registers
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (61 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 62/80] net/ntnic: added flow statistics Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 64/80] net/ntnic: add high-level flow aging support Serhii Iliushyk
` (16 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Scrub fields were added to the FPGA map file.
The duplicated MAX_QUEUES macro was removed.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../supported/nthw_fpga_9563_055_049_0000.c | 17 ++++++++++++++++-
drivers/net/ntnic/ntnic_ethdev.c | 3 ---
2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
index 620968ceb6..f1033ca949 100644
--- a/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
+++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_9563_055_049_0000.c
@@ -728,7 +728,7 @@ static nthw_fpga_field_init_s flm_lrn_data_fields[] = {
{ FLM_LRN_DATA_PRIO, 2, 691, 0x0000 }, { FLM_LRN_DATA_PROT, 8, 320, 0x0000 },
{ FLM_LRN_DATA_QFI, 6, 704, 0x0000 }, { FLM_LRN_DATA_QW0, 128, 192, 0x0000 },
{ FLM_LRN_DATA_QW4, 128, 64, 0x0000 }, { FLM_LRN_DATA_RATE, 16, 416, 0x0000 },
- { FLM_LRN_DATA_RQI, 1, 710, 0x0000 },
+ { FLM_LRN_DATA_RQI, 1, 710, 0x0000 }, { FLM_LRN_DATA_SCRUB_PROF, 4, 712, 0x0000 },
{ FLM_LRN_DATA_SIZE, 16, 432, 0x0000 }, { FLM_LRN_DATA_STAT_PROF, 4, 687, 0x0000 },
{ FLM_LRN_DATA_SW8, 32, 32, 0x0000 }, { FLM_LRN_DATA_SW9, 32, 0, 0x0000 },
{ FLM_LRN_DATA_TEID, 32, 368, 0x0000 }, { FLM_LRN_DATA_VOL_IDX, 3, 684, 0x0000 },
@@ -782,6 +782,18 @@ static nthw_fpga_field_init_s flm_scan_fields[] = {
{ FLM_SCAN_I, 16, 0, 0 },
};
+static nthw_fpga_field_init_s flm_scrub_ctrl_fields[] = {
+ { FLM_SCRUB_CTRL_ADR, 4, 0, 0x0000 },
+ { FLM_SCRUB_CTRL_CNT, 16, 16, 0x0000 },
+};
+
+static nthw_fpga_field_init_s flm_scrub_data_fields[] = {
+ { FLM_SCRUB_DATA_DEL, 1, 12, 0 },
+ { FLM_SCRUB_DATA_INF, 1, 13, 0 },
+ { FLM_SCRUB_DATA_R, 4, 8, 0 },
+ { FLM_SCRUB_DATA_T, 8, 0, 0 },
+};
+
static nthw_fpga_field_init_s flm_status_fields[] = {
{ FLM_STATUS_CACHE_BUFFER_CRITICAL, 1, 12, 0x0000 },
{ FLM_STATUS_CALIB_FAIL, 3, 3, 0 },
@@ -921,6 +933,8 @@ static nthw_fpga_register_init_s flm_registers[] = {
{ FLM_RCP_CTRL, 8, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_rcp_ctrl_fields },
{ FLM_RCP_DATA, 9, 403, NTHW_FPGA_REG_TYPE_WO, 0, 19, flm_rcp_data_fields },
{ FLM_SCAN, 2, 16, NTHW_FPGA_REG_TYPE_WO, 0, 1, flm_scan_fields },
+ { FLM_SCRUB_CTRL, 10, 32, NTHW_FPGA_REG_TYPE_WO, 0, 2, flm_scrub_ctrl_fields },
+ { FLM_SCRUB_DATA, 11, 14, NTHW_FPGA_REG_TYPE_WO, 0, 4, flm_scrub_data_fields },
{ FLM_STATUS, 1, 17, NTHW_FPGA_REG_TYPE_MIXED, 0, 9, flm_status_fields },
{ FLM_STAT_AUL_DONE, 41, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_done_fields },
{ FLM_STAT_AUL_FAIL, 43, 32, NTHW_FPGA_REG_TYPE_RO, 0, 1, flm_stat_aul_fail_fields },
@@ -3058,6 +3072,7 @@ static nthw_fpga_prod_param_s product_parameters[] = {
{ NT_FLM_PRESENT, 1 },
{ NT_FLM_PRIOS, 4 },
{ NT_FLM_PST_PROFILES, 16 },
+ { NT_FLM_SCRUB_PROFILES, 16 },
{ NT_FLM_SIZE_MB, 12288 },
{ NT_FLM_STATEFUL, 1 },
{ NT_FLM_VARIANT, 2 },
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 4a0dafeff0..a212b3ab07 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -47,9 +47,6 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
-/* Max RSS queues */
-#define MAX_QUEUES 125
-
#define NUM_VQ_SEGS(_data_size_) \
({ \
size_t _size = (_data_size_); \
--
2.45.0
* [PATCH v5 64/80] net/ntnic: add high-level flow aging support
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (62 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 63/80] net/ntnic: add scrub registers Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 65/80] net/ntnic: add aging to the inline profile Serhii Iliushyk
` (15 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add flow aging functions to the ops structure.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 71 +++++++++++++++
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 88 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 21 +++++
3 files changed, 180 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 0e9fc33dec..3d65c0f3d0 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1014,6 +1014,70 @@ int flow_nic_set_hasher_fields(struct flow_nic_dev *ndev, int hsh_idx,
return profile_inline_ops->flow_nic_set_hasher_fields_inline(ndev, hsh_idx, rss_conf);
}
+static int flow_get_aged_flows(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline_ops uninitialized");
+ return -1;
+ }
+
+ if (nb_contexts > 0 && !context) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "rte_flow_get_aged_flows - empty context";
+ return -1;
+ }
+
+ return profile_inline_ops->flow_get_aged_flows_profile_inline(dev, caller_id, context,
+ nb_contexts, error);
+}
+
+static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_info;
+ (void)queue_info;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
+{
+ (void)dev;
+ (void)caller_id;
+ (void)port_attr;
+ (void)queue_attr;
+ (void)nb_queue;
+ (void)error;
+
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return 0;
+}
+
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
@@ -1042,6 +1106,13 @@ static const struct flow_filter_ops ops = {
.flow_flush = flow_flush,
.flow_dev_dump = flow_dev_dump,
.flow_get_flm_stats = flow_get_flm_stats,
+ .flow_get_aged_flows = flow_get_aged_flows,
+
+ /*
+ * NT Flow asynchronous operations API
+ */
+ .flow_info_get = flow_info_get,
+ .flow_configure = flow_configure,
/*
* Other
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index ef69064f98..6d65ffd38f 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -731,6 +731,91 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return res;
}
+static int eth_flow_get_aged_flows(struct rte_eth_dev *eth_dev,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ uint16_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ int res = flow_filter_ops->flow_get_aged_flows(internals->flw_dev, caller_id, context,
+ nb_contexts, &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+/*
+ * NT Flow asynchronous operations API
+ */
+
+static int eth_flow_info_get(struct rte_eth_dev *dev, struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_info_get(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (struct rte_flow_port_info *)port_info,
+ (struct rte_flow_queue_info *)queue_info,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
+static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr,
+ uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = {
+ .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_configure(internals->flw_dev,
+ get_caller_id(dev->data->port_id),
+ (const struct rte_flow_port_attr *)port_attr,
+ nb_queue,
+ (const struct rte_flow_queue_attr **)queue_attr,
+ &flow_error);
+
+ convert_error(error, &flow_error);
+ return res;
+}
+
static int poll_statistics(struct pmd_internals *internals)
{
int flow;
@@ -857,6 +942,9 @@ static const struct rte_flow_ops dev_flow_ops = {
.destroy = eth_flow_destroy,
.flush = eth_flow_flush,
.dev_dump = eth_flow_dev_dump,
+ .get_aged_flows = eth_flow_get_aged_flows,
+ .info_get = eth_flow_info_get,
+ .configure = eth_flow_configure,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 7325bd1ea8..52f197e873 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -286,6 +286,12 @@ struct profile_inline_ops {
FILE *file,
struct rte_flow_error *error);
+ int (*flow_get_aged_flows_profile_inline)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
@@ -355,6 +361,21 @@ struct flow_filter_ops {
int (*flow_nic_set_hasher_fields)(struct flow_nic_dev *ndev, int hsh_idx,
struct nt_eth_rss_conf rss_conf);
int (*hw_mod_hsh_rcp_flush)(struct flow_api_backend_s *be, int start_idx, int count);
+
+ int (*flow_get_aged_flows)(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
+ int (*flow_info_get)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_dev_flow_ops(const struct rte_flow_ops *ops);
--
2.45.0
* [PATCH v5 65/80] net/ntnic: add aging to the inline profile
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (63 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 64/80] net/ntnic: add high-level flow aging support Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 66/80] net/ntnic: add flow info and flow configure support Serhii Iliushyk
` (14 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Added an implementation for retrieving aged flows.
The module operating the age queue was extended with
get, count, and size operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/meson.build | 1 +
.../flow_api/profile_inline/flm_age_queue.c | 49 ++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 24 +++++++++
.../profile_inline/flow_api_profile_inline.c | 51 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 6 +++
5 files changed, 131 insertions(+)
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
create mode 100644 drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index c0b7729929..8c6d02a5ec 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -58,6 +58,7 @@ sources = files(
'nthw/flow_api/flow_group.c',
'nthw/flow_api/flow_id_table.c',
'nthw/flow_api/hw_mod/hw_mod_backend.c',
+ 'nthw/flow_api/profile_inline/flm_age_queue.c',
'nthw/flow_api/profile_inline/flm_lrn_queue.c',
'nthw/flow_api/profile_inline/flm_evt_queue.c',
'nthw/flow_api/profile_inline/flow_api_profile_inline.c',
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
new file mode 100644
index 0000000000..f6f04009fe
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -0,0 +1,49 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#include <rte_ring.h>
+
+#include "ntlog.h"
+#include "flm_age_queue.h"
+
+/* Queues for flm aged events */
+static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+	/* If the queue is not created, then ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sc_dequeue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue empty");
+
+ return ret;
+ }
+
+ return -ENOENT;
+}
+
+unsigned int flm_age_queue_count(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_count(age_queue[caller_id]);
+
+ return ret;
+}
+
+unsigned int flm_age_queue_get_size(uint16_t caller_id)
+{
+ unsigned int ret = 0;
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL)
+ ret = rte_ring_get_size(age_queue[caller_id]);
+
+ return ret;
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
new file mode 100644
index 0000000000..d61609cc01
--- /dev/null
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -0,0 +1,24 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Napatech A/S
+ */
+
+#ifndef _FLM_AGE_QUEUE_H_
+#define _FLM_AGE_QUEUE_H_
+
+#include "stdint.h"
+
+struct flm_age_event_s {
+ void *context;
+};
+
+/* Max number of event queues */
+#define MAX_EVT_AGE_QUEUES 256
+
+#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+
+int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
+unsigned int flm_age_queue_count(uint16_t caller_id);
+unsigned int flm_age_queue_get_size(uint16_t caller_id);
+
+#endif /* _FLM_AGE_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index c676e20601..5fe09a43a5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -7,6 +7,7 @@
#include "nt_util.h"
#include "hw_mod_backend.h"
+#include "flm_age_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -4390,6 +4391,55 @@ static void dump_flm_data(const uint32_t *data, FILE *file)
}
}
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ unsigned int queue_size = flm_age_queue_get_size(caller_id);
+
+ if (queue_size == 0) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue size is not configured";
+ return -1;
+ }
+
+ unsigned int queue_count = flm_age_queue_count(caller_id);
+
+ if (context == NULL)
+ return queue_count;
+
+ if (queue_count < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Aged queue size contains fewer records than the expected output";
+ return -1;
+ }
+
+ if (queue_size < nb_contexts) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Defined aged queue size is smaller than the expected output";
+ return -1;
+ }
+
+ uint32_t idx;
+
+ for (idx = 0; idx < nb_contexts; ++idx) {
+ struct flm_age_event_s obj;
+ int ret = flm_age_queue_get(caller_id, &obj);
+
+ if (ret != 0)
+ break;
+
+ context[idx] = obj.context;
+ }
+
+ return idx;
+}
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -4520,6 +4570,7 @@ static const struct profile_inline_ops ops = {
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
+ .flow_get_aged_flows_profile_inline = flow_get_aged_flows_profile_inline,
/*
* Stats
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b44d3a7291..e1934bc6a6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -48,6 +48,12 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
FILE *file,
struct rte_flow_error *error);
+int flow_get_aged_flows_profile_inline(struct flow_eth_dev *dev,
+ uint16_t caller_id,
+ void **context,
+ uint32_t nb_contexts,
+ struct rte_flow_error *error);
+
int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
--
2.45.0
* [PATCH v5 66/80] net/ntnic: add flow info and flow configure support
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (64 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 65/80] net/ntnic: add aging to the inline profile Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 67/80] net/ntnic: add flow aging event Serhii Iliushyk
` (13 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The inline profile was extended with flow info and flow configure support.
The module operating the age queue was extended with
create and free operations.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 3 +
drivers/net/ntnic/include/flow_api_engine.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 19 +----
.../flow_api/profile_inline/flm_age_queue.c | 77 +++++++++++++++++++
.../flow_api/profile_inline/flm_age_queue.h | 5 ++
.../profile_inline/flow_api_profile_inline.c | 62 ++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 9 +++
drivers/net/ntnic/ntnic_mod_reg.h | 9 +++
8 files changed, 169 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index ed96f77bc0..89f071d982 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -77,6 +77,9 @@ struct flow_eth_dev {
/* QSL_HSH index if RSS needed QSL v6+ */
int rss_target_id;
+ /* The size of buffer for aged out flow list */
+ uint32_t nb_aging_objects;
+
struct flow_eth_dev *next;
};
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 155a9e1fd6..604a896717 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -320,6 +320,7 @@ struct flow_handle {
uint32_t flm_teid;
uint8_t flm_rqi;
uint8_t flm_qfi;
+ uint8_t flm_scrub_prof;
};
};
};
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 3d65c0f3d0..76492902ad 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1041,12 +1041,6 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_info;
- (void)queue_info;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1054,20 +1048,14 @@ static int flow_info_get(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_info_get_profile_inline(dev, caller_id, port_info,
+ queue_info, error);
}
static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error)
{
- (void)dev;
- (void)caller_id;
- (void)port_attr;
- (void)queue_attr;
- (void)nb_queue;
- (void)error;
-
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
if (profile_inline_ops == NULL) {
@@ -1075,7 +1063,8 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
- return 0;
+ return profile_inline_ops->flow_configure_profile_inline(dev, caller_id, port_attr,
+ nb_queue, queue_attr, error);
}
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index f6f04009fe..fbc947ee1d 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -4,12 +4,89 @@
*/
#include <rte_ring.h>
+#include <rte_errno.h>
#include "ntlog.h"
#include "flm_age_queue.h"
/* Queues for flm aged events */
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
+static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+
+void flm_age_queue_free(uint8_t port, uint16_t caller_id)
+{
+ struct rte_ring *q = NULL;
+
+ if (port < MAX_EVT_AGE_PORTS)
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ q = age_queue[caller_id];
+ age_queue[caller_id] = NULL;
+ }
+
+ if (q != NULL)
+ rte_ring_free(q);
+}
+
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
+{
+ char name[20];
+ struct rte_ring *q = NULL;
+
+ if (rte_is_power_of_2(count) == false || count > RTE_RING_SZ_MASK) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue number of elements (%u) is invalid, must be power of 2, and not exceed %u",
+ count,
+ RTE_RING_SZ_MASK);
+ return NULL;
+ }
+
+ if (port >= MAX_EVT_AGE_PORTS) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for port %u. Max supported port is %u",
+ port,
+ MAX_EVT_AGE_PORTS - 1);
+ return NULL;
+ }
+
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+
+ if (caller_id >= MAX_EVT_AGE_QUEUES) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created for caller_id %u. Max supported caller_id is %u",
+ caller_id,
+ MAX_EVT_AGE_QUEUES - 1);
+ return NULL;
+ }
+
+ if (age_queue[caller_id] != NULL) {
+ NT_LOG(DBG, FILTER, "FLM aged event queue %u already created", caller_id);
+ return age_queue[caller_id];
+ }
+
+ snprintf(name, 20, "AGE_EVENT%u", caller_id);
+ q = rte_ring_create_elem(name,
+ FLM_AGE_ELEM_SIZE,
+ count,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+
+ if (q == NULL) {
+ NT_LOG(WRN,
+ FILTER,
+ "FLM aged event queue cannot be created due to error %02X",
+ rte_errno);
+ return NULL;
+ }
+
+ age_queue[caller_id] = q;
+
+ return q;
+}
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index d61609cc01..9ff6ef6de0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -15,8 +15,13 @@ struct flm_age_event_s {
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
+/* Max number of event ports */
+#define MAX_EVT_AGE_PORTS 128
+
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 5fe09a43a5..a147dd9fd0 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -490,7 +490,7 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->ft = fh->flm_ft;
learn_record->kid = fh->flm_kid;
learn_record->eor = 1;
- learn_record->scrub_prof = 0;
+ learn_record->scrub_prof = fh->flm_scrub_prof;
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
return 0;
@@ -2438,6 +2438,7 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
fh->flm_rpl_ext_ptr = rpl_ext_ptr;
fh->flm_prio = (uint8_t)priority;
fh->flm_ft = (uint8_t)flm_ft;
+ fh->flm_scrub_prof = (uint8_t)flm_scrub;
for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
switch (fd->modify_field[i].select) {
@@ -4554,6 +4555,63 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
return 0;
}
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
+{
+ (void)queue_info;
+ (void)caller_id;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+ memset(port_info, 0, sizeof(struct rte_flow_port_info));
+
+ port_info->max_nb_aging_objects = dev->nb_aging_objects;
+
+ return res;
+}
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ (void)nb_queue;
+ (void)queue_attr;
+ int res = 0;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (port_attr->nb_aging_objects > 0) {
+ if (dev->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ struct rte_ring *age_queue =
+ flm_age_queue_create(dev->port_id, caller_id, port_attr->nb_aging_objects);
+
+ if (age_queue == NULL) {
+ error->message = "Failed to allocate aging objects";
+ goto error_out;
+ }
+
+ dev->nb_aging_objects = port_attr->nb_aging_objects;
+ }
+
+ return res;
+
+error_out:
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+
+ if (port_attr->nb_aging_objects > 0) {
+ flm_age_queue_free(dev->port_id, caller_id);
+ dev->nb_aging_objects = 0;
+ }
+
+ return -1;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -4575,6 +4633,8 @@ static const struct profile_inline_ops ops = {
* Stats
*/
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
+ .flow_info_get_profile_inline = flow_info_get_profile_inline,
+ .flow_configure_profile_inline = flow_configure_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index e1934bc6a6..ea1d9c31b2 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -64,4 +64,13 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info,
+ struct rte_flow_queue_info *queue_info, struct rte_flow_error *error);
+
+int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 52f197e873..15da911ca7 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -309,6 +309,15 @@ struct profile_inline_ops {
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
uint32_t (*flm_update)(struct flow_eth_dev *dev);
+
+ int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
+ struct rte_flow_error *error);
+
+ int (*flow_configure_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
+ const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 67/80] net/ntnic: add flow aging event
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (65 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 66/80] net/ntnic: add flow info and flow configure support Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 68/80] net/ntnic: add termination thread Serhii Iliushyk
` (12 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The port thread was extended with a new age event callback handler.
Getters and setters for the LRN, INF, and STA registers were added.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 19 ++
doc/guides/rel_notes/release_24_11.rst | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 13 ++
drivers/net/ntnic/include/hw_mod_backend.h | 11 ++
.../net/ntnic/nthw/flow_api/flow_id_table.c | 16 ++
.../net/ntnic/nthw/flow_api/flow_id_table.h | 3 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c | 158 +++++++++++++++
.../flow_api/profile_inline/flm_age_queue.c | 28 +++
.../flow_api/profile_inline/flm_age_queue.h | 12 ++
.../flow_api/profile_inline/flm_evt_queue.c | 20 ++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_hw_db_inline.c | 142 +++++++++++++-
.../profile_inline/flow_api_hw_db_inline.h | 84 ++++----
.../profile_inline/flow_api_profile_inline.c | 183 ++++++++++++++++++
.../flow_api_profile_inline_config.h | 21 +-
drivers/net/ntnic/ntnic_ethdev.c | 16 ++
17 files changed, 692 insertions(+), 37 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 947c7ba3a1..af2981ccf6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -33,6 +33,7 @@ udp = Y
vlan = Y
[rte_flow actions]
+age = Y
drop = Y
jump = Y
mark = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 47960ca3f1..806732e790 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -67,6 +67,7 @@ Features
- RMON statistics in extended stats.
- Link state information.
- Flow statistics
+- Flow aging support
Limitations
~~~~~~~~~~~
@@ -145,3 +146,21 @@ FILTER
To enable logging on all levels use wildcard in the following way::
--log-level=pmd.net.ntnic.*,8
+
+Flow Scanner
+------------
+
+The flow scanner is a DPDK mechanism that periodically scans the RTE flow tables to check for aged-out flows.
+When a flow's timeout is reached, i.e. no packets were matched by the flow within the timeout period,
+an ``RTE_ETH_EVENT_FLOW_AGED`` event is reported and the flow is marked as aged-out.
+
+Flow scanner functionality is therefore closely tied to the ``age`` action of the RTE flow API.
+
+The ``age`` timeout action has the following characteristics:
+
+- it functions only in groups > 0;
+- the flow timeout is specified in seconds;
+- the flow scanner checks flow timeouts once every 1-480 seconds, so flows may not age out immediately, depending on the length of the scanner's check interval;
+- aging counters can display a maximum of **n - 1** aged flows when the aging counters are set to **n**;
+- overall, 15 distinct timeouts can be specified at the same time; this limit is combined across all groups, and the maximum of 15 is only reachable across different groups (for example, when five flows with different timeouts are created in each group), while the limit within a single group is 14 distinct timeouts;
+- an aged-out flow is not deleted automatically;
+- an aged-out flow can be updated with the ``flow update`` command, which reverts its aged-out status.
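As an illustration of the ``age`` action documented above, an aging rule could be created from testpmd roughly as follows. This is a sketch only: the port number and pattern are hypothetical, and exact token syntax may vary between testpmd versions.

```shell
# Rule in group 1 (age functions only in groups > 0) that ages out
# after ~20 seconds without matching traffic.
testpmd> flow create 0 ingress group 1 pattern eth / ipv4 / end \
         actions age timeout 20 / queue index 0 / end
```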
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 1b7e4ab3ae..8090c925fd 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -164,6 +164,7 @@ New Features
* Added flow handling support
* Enable virtual queues
* Added statistics support
+ * Added age rte flow action support
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 604a896717..c75e7cff83 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -148,6 +148,14 @@ struct hsh_def_s {
const uint8_t *key; /* Hash key. */
};
+/*
+ * AGE configuration, see struct rte_flow_action_age
+ */
+struct age_def_s {
+ uint32_t timeout;
+ void *context;
+};
+
/*
* Tunnel encapsulation header definition
*/
@@ -264,6 +272,11 @@ struct nic_flow_def {
* Hash module RSS definitions
*/
struct hsh_def_s hsh;
+
+ /*
+ * AGE action timeout
+ */
+ struct age_def_s age;
};
enum flow_handle_type {
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 9cd9d92823..7a36e4c6d6 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -688,6 +688,9 @@ int hw_mod_flm_rcp_set_mask(struct flow_api_backend_s *be, enum hw_flm_e field,
int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
uint32_t value);
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be);
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be);
int hw_mod_flm_stat_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value);
@@ -695,8 +698,16 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
const uint32_t *value, uint32_t records,
uint32_t *handled_records, uint32_t *inf_word_cnt,
uint32_t *sta_word_cnt);
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt);
+uint32_t hw_mod_flm_scrub_timeout_decode(uint32_t t_enc);
+uint32_t hw_mod_flm_scrub_timeout_encode(uint32_t t);
int hw_mod_flm_scrub_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_flm_scrub_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value);
struct hsh_func_s {
COMMON_FUNC_INFO_S;
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index 5635ac4524..a3f5e1d7f7 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -129,3 +129,19 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
pthread_mutex_unlock(&handle->mtx);
}
+
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type)
+{
+ struct ntnic_id_table_data *handle = id_table;
+
+ pthread_mutex_lock(&handle->mtx);
+
+ struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, id);
+
+ *caller_id = element->caller_id;
+ *type = element->type;
+ memcpy(flm_h, &element->handle, sizeof(union flm_handles));
+
+ pthread_mutex_unlock(&handle->mtx);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
index e190fe4a11..edb4f42729 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.h
@@ -20,4 +20,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
uint8_t type);
void ntnic_id_table_free_id(void *id_table, uint32_t id);
+void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
+ uint8_t *type);
+
#endif /* FLOW_ID_TABLE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
index 1845f74166..14dd95a150 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_flm.c
@@ -712,6 +712,52 @@ int hw_mod_flm_rcp_set(struct flow_api_backend_s *be, enum hw_flm_e field, int i
return hw_mod_flm_rcp_mod(be, field, index, &value, 0);
}
+
+int hw_mod_flm_buf_ctrl_update(struct flow_api_backend_s *be)
+{
+ return be->iface->flm_buf_ctrl_update(be->be_dev, &be->flm);
+}
+
+static int hw_mod_flm_buf_ctrl_mod_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *value)
+{
+ int get = 1; /* Only get supported */
+
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_BUF_CTRL_LRN_FREE:
+ GET_SET(be->flm.v25.buf_ctrl->lrn_free, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_INF_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->inf_avail, value);
+ break;
+
+ case HW_FLM_BUF_CTRL_STA_AVAIL:
+ GET_SET(be->flm.v25.buf_ctrl->sta_avail, value);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_buf_ctrl_get(struct flow_api_backend_s *be, enum hw_flm_e field, uint32_t *value)
+{
+ return hw_mod_flm_buf_ctrl_mod_get(be, field, value);
+}
+
int hw_mod_flm_stat_update(struct flow_api_backend_s *be)
{
return be->iface->flm_stat_update(be->be_dev, &be->flm);
@@ -887,3 +933,115 @@ int hw_mod_flm_lrn_data_set_flush(struct flow_api_backend_s *be, enum hw_flm_e f
return ret;
}
+
+int hw_mod_flm_inf_sta_data_update_get(struct flow_api_backend_s *be, enum hw_flm_e field,
+ uint32_t *inf_value, uint32_t inf_size,
+ uint32_t *inf_word_cnt, uint32_t *sta_value,
+ uint32_t sta_size, uint32_t *sta_word_cnt)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_FLOW_INF_STA_DATA:
+ be->iface->flm_inf_sta_data_update(be->be_dev, &be->flm, inf_value,
+ inf_size, inf_word_cnt, sta_value,
+ sta_size, sta_word_cnt);
+ break;
+
+ default:
+ UNSUP_FIELD_LOG;
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ UNSUP_VER_LOG;
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+/*
+ * SCRUB timeout support functions to encode users' input into FPGA 8-bit time format:
+ * Timeout in seconds (2^30 nanoseconds); zero means disabled. Value is:
+ *
+ * (T[7:3] != 0) ? ((8 + T[2:0]) shift-left (T[7:3] - 1)) : T[2:0]
+ *
+ * The maximum allowed value is 0xEF (127 years).
+ *
+ * Note that this represents a lower bound on the timeout; depending on the flow
+ * scanner interval and overall load, the actual timeout can be substantially longer.
+ */
+uint32_t hw_mod_flm_scrub_timeout_decode(uint32_t t_enc)
+{
+ uint8_t t_bits_2_0 = t_enc & 0x07;
+ uint8_t t_bits_7_3 = (t_enc >> 3) & 0x1F;
+ return t_bits_7_3 != 0 ? ((8 + t_bits_2_0) << (t_bits_7_3 - 1)) : t_bits_2_0;
+}
+
+uint32_t hw_mod_flm_scrub_timeout_encode(uint32_t t)
+{
+ uint32_t t_enc = 0;
+
+ if (t > 0) {
+ uint32_t t_dec = 0;
+
+ do {
+ t_enc++;
+ t_dec = hw_mod_flm_scrub_timeout_decode(t_enc);
+ } while (t_enc <= 0xEF && t_dec < t);
+ }
+
+ return t_enc;
+}
+
+static int hw_mod_flm_scrub_mod(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t *value, int get)
+{
+ switch (_VER_) {
+ case 25:
+ switch (field) {
+ case HW_FLM_SCRUB_PRESET_ALL:
+ if (get)
+ return UNSUP_FIELD;
+
+ memset(&be->flm.v25.scrub[index], (uint8_t)*value,
+ sizeof(struct flm_v25_scrub_s));
+ break;
+
+ case HW_FLM_SCRUB_T:
+ GET_SET(be->flm.v25.scrub[index].t, value);
+ break;
+
+ case HW_FLM_SCRUB_R:
+ GET_SET(be->flm.v25.scrub[index].r, value);
+ break;
+
+ case HW_FLM_SCRUB_DEL:
+ GET_SET(be->flm.v25.scrub[index].del, value);
+ break;
+
+ case HW_FLM_SCRUB_INF:
+ GET_SET(be->flm.v25.scrub[index].inf, value);
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_flm_scrub_set(struct flow_api_backend_s *be, enum hw_flm_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_flm_scrub_mod(be, field, index, &value, 0);
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index fbc947ee1d..76bbd57f65 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -13,6 +13,21 @@
static struct rte_ring *age_queue[MAX_EVT_AGE_QUEUES];
static RTE_ATOMIC(uint16_t) age_event[MAX_EVT_AGE_PORTS];
+__rte_always_inline int flm_age_event_get(uint8_t port)
+{
+ return rte_atomic_load_explicit(&age_event[port], rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_set(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 1, rte_memory_order_seq_cst);
+}
+
+__rte_always_inline void flm_age_event_clear(uint8_t port)
+{
+ rte_atomic_store_explicit(&age_event[port], 0, rte_memory_order_seq_cst);
+}
+
void flm_age_queue_free(uint8_t port, uint16_t caller_id)
{
struct rte_ring *q = NULL;
@@ -88,6 +103,19 @@ struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned
return q;
}
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj)
+{
+ int ret;
+
+ /* If the queue has not been created, ignore and return */
+ if (caller_id < MAX_EVT_AGE_QUEUES && age_queue[caller_id] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(age_queue[caller_id], obj, FLM_AGE_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM aged event queue full");
+ }
+}
+
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 9ff6ef6de0..27154836c5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -12,6 +12,14 @@ struct flm_age_event_s {
void *context;
};
+/* Indicates why the flow info record was generated */
+#define INF_DATA_CAUSE_SW_UNLEARN 0
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED 1
+#define INF_DATA_CAUSE_NA 2
+#define INF_DATA_CAUSE_PERIODIC_FLOW_INFO 3
+#define INF_DATA_CAUSE_SW_PROBE 4
+#define INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT 5
+
/* Max number of event queues */
#define MAX_EVT_AGE_QUEUES 256
@@ -20,8 +28,12 @@ struct flm_age_event_s {
#define FLM_AGE_ELEM_SIZE sizeof(struct flm_age_event_s)
+int flm_age_event_get(uint8_t port);
+void flm_age_event_set(uint8_t port);
+void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
+void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
unsigned int flm_age_queue_count(uint16_t caller_id);
unsigned int flm_age_queue_get_size(uint16_t caller_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 98b0e8347a..db9687714f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -138,6 +138,26 @@ static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
return q;
}
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
+{
+ struct rte_ring **stat_q = remote ? stat_q_remote : stat_q_local;
+
+ if (port >= (remote ? MAX_STAT_RMT_QUEUES : MAX_STAT_LCL_QUEUES))
+ return -1;
+
+ if (stat_q[port] == NULL) {
+ if (flm_evt_queue_create(port, remote ? FLM_STAT_REMOTE : FLM_STAT_LOCAL) == NULL)
+ return -1;
+ }
+
+ if (rte_ring_sp_enqueue_elem(stat_q[port], obj, FLM_STAT_ELEM_SIZE) != 0) {
+ NT_LOG(DBG, FILTER, "FLM status queue full");
+ return -1;
+ }
+
+ return 0;
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 238be7a3b2..3a61f844b6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,5 +48,6 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
+int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
#endif /* _FLM_EVT_QUEUE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index b5fee67e67..2fee6ae6b5 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -7,6 +7,7 @@
#include "flow_api_engine.h"
#include "flow_api_hw_db_inline.h"
+#include "flow_api_profile_inline_config.h"
#include "rte_common.h"
#define HW_DB_INLINE_ACTION_SET_NB 512
@@ -57,12 +58,18 @@ struct hw_db_inline_resource_db {
int ref;
} *hsh;
+ struct hw_db_inline_resource_db_scrub {
+ struct hw_db_inline_scrub_data data;
+ int ref;
+ } *scrub;
+
uint32_t nb_cot;
uint32_t nb_qsl;
uint32_t nb_slc_lr;
uint32_t nb_tpe;
uint32_t nb_tpe_ext;
uint32_t nb_hsh;
+ uint32_t nb_scrub;
/* Items */
struct hw_db_inline_resource_db_cat {
@@ -255,6 +262,14 @@ int hw_db_inline_create(struct flow_nic_dev *ndev, void **db_handle)
return -1;
}
+ db->nb_scrub = ndev->be.flm.nb_scrub_profiles;
+ db->scrub = calloc(db->nb_scrub, sizeof(struct hw_db_inline_resource_db_scrub));
+
+ if (db->scrub == NULL) {
+ hw_db_inline_destroy(db);
+ return -1;
+ }
+
*db_handle = db;
/* Preset data */
@@ -276,6 +291,7 @@ void hw_db_inline_destroy(void *db_handle)
free(db->tpe);
free(db->tpe_ext);
free(db->hsh);
+ free(db->scrub);
free(db->cat);
@@ -366,6 +382,11 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
hw_db_inline_hsh_deref(ndev, db_handle, *(struct hw_db_hsh_idx *)&idxs[i]);
break;
+ case HW_DB_IDX_TYPE_FLM_SCRUB:
+ hw_db_inline_scrub_deref(ndev, db_handle,
+ *(struct hw_db_flm_scrub_idx *)&idxs[i]);
+ break;
+
default:
break;
}
@@ -410,9 +431,9 @@ void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct
else
fprintf(file,
- " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d\n",
+ " COT id %d, QSL id %d, SLC_LR id %d, TPE id %d, HSH id %d, SCRUB id %d\n",
data->cot.ids, data->qsl.ids, data->slc_lr.ids,
- data->tpe.ids, data->hsh.ids);
+ data->tpe.ids, data->hsh.ids, data->scrub.ids);
break;
}
@@ -577,6 +598,15 @@ void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct
break;
}
+ case HW_DB_IDX_TYPE_FLM_SCRUB: {
+ const struct hw_db_inline_scrub_data *data = &db->scrub[idxs[i].ids].data;
+ fprintf(file, " FLM_RCP %d\n", idxs[i].id1);
+ fprintf(file, " SCRUB %d\n", idxs[i].ids);
+ fprintf(file, " Timeout: %d, encoded timeout: %d\n",
+ hw_mod_flm_scrub_timeout_decode(data->timeout), data->timeout);
+ break;
+ }
+
case HW_DB_IDX_TYPE_HSH: {
const struct hw_db_inline_hsh_data *data = &db->hsh[idxs[i].ids].data;
fprintf(file, " HSH %d\n", idxs[i].ids);
@@ -690,6 +720,9 @@ const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
case HW_DB_IDX_TYPE_HSH:
return &db->hsh[idxs[i].ids].data;
+ case HW_DB_IDX_TYPE_FLM_SCRUB:
+ return &db->scrub[idxs[i].ids].data;
+
default:
return NULL;
}
@@ -1540,7 +1573,7 @@ static int hw_db_inline_action_set_compare(const struct hw_db_inline_action_set_
return data1->cot.raw == data2->cot.raw && data1->qsl.raw == data2->qsl.raw &&
data1->slc_lr.raw == data2->slc_lr.raw && data1->tpe.raw == data2->tpe.raw &&
- data1->hsh.raw == data2->hsh.raw;
+ data1->hsh.raw == data2->hsh.raw && data1->scrub.raw == data2->scrub.raw;
}
struct hw_db_action_set_idx
@@ -2849,3 +2882,106 @@ void hw_db_inline_hsh_deref(struct flow_nic_dev *ndev, void *db_handle, struct h
db->hsh[idx.ids].ref = 0;
}
}
+
+/******************************************************************************/
+/* FLM SCRUB */
+/******************************************************************************/
+
+static int hw_db_inline_scrub_compare(const struct hw_db_inline_scrub_data *data1,
+ const struct hw_db_inline_scrub_data *data2)
+{
+ return data1->timeout == data2->timeout;
+}
+
+struct hw_db_flm_scrub_idx hw_db_inline_scrub_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_scrub_data *data)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+ struct hw_db_flm_scrub_idx idx = { .raw = 0 };
+ int found = 0;
+
+ idx.type = HW_DB_IDX_TYPE_FLM_SCRUB;
+
+ /* NOTE: scrub id 0 is reserved for "default" timeout 0, i.e. flow will never AGE-out */
+ if (data->timeout == 0) {
+ idx.ids = 0;
+ hw_db_inline_scrub_ref(ndev, db, idx);
+ return idx;
+ }
+
+ for (uint32_t i = 1; i < db->nb_scrub; ++i) {
+ int ref = db->scrub[i].ref;
+
+ if (ref > 0 && hw_db_inline_scrub_compare(data, &db->scrub[i].data)) {
+ idx.ids = i;
+ hw_db_inline_scrub_ref(ndev, db, idx);
+ return idx;
+ }
+
+ if (!found && ref <= 0) {
+ found = 1;
+ idx.ids = i;
+ }
+ }
+
+ if (!found) {
+ idx.error = 1;
+ return idx;
+ }
+
+ int res = hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_T, idx.ids, data->timeout);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_R, idx.ids,
+ NTNIC_SCANNER_TIMEOUT_RESOLUTION);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_DEL, idx.ids, SCRUB_DEL);
+ res |= hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_INF, idx.ids, SCRUB_INF);
+
+ if (res != 0) {
+ idx.error = 1;
+ return idx;
+ }
+
+ db->scrub[idx.ids].ref = 1;
+ memcpy(&db->scrub[idx.ids].data, data, sizeof(struct hw_db_inline_scrub_data));
+ flow_nic_mark_resource_used(ndev, RES_SCRUB_RCP, idx.ids);
+
+ hw_mod_flm_scrub_flush(&ndev->be, idx.ids, 1);
+
+ return idx;
+}
+
+void hw_db_inline_scrub_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx)
+{
+ (void)ndev;
+
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (!idx.error)
+ db->scrub[idx.ids].ref += 1;
+}
+
+void hw_db_inline_scrub_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx)
+{
+ struct hw_db_inline_resource_db *db = (struct hw_db_inline_resource_db *)db_handle;
+
+ if (idx.error)
+ return;
+
+ db->scrub[idx.ids].ref -= 1;
+
+ if (db->scrub[idx.ids].ref <= 0) {
+ /* NOTE: scrub id 0 is reserved for "default" timeout 0, which shall not be removed
+ */
+ if (idx.ids > 0) {
+ hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_T, idx.ids, 0);
+ hw_mod_flm_scrub_flush(&ndev->be, idx.ids, 1);
+
+ memset(&db->scrub[idx.ids].data, 0x0,
+ sizeof(struct hw_db_inline_scrub_data));
+ flow_nic_free_resource(ndev, RES_SCRUB_RCP, idx.ids);
+ }
+
+ db->scrub[idx.ids].ref = 0;
+ }
+}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index a9d31c86ea..c920d36cfd 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -117,6 +117,10 @@ struct hw_db_flm_ft {
HW_DB_IDX;
};
+struct hw_db_flm_scrub_idx {
+ HW_DB_IDX;
+};
+
struct hw_db_km_idx {
HW_DB_IDX;
};
@@ -145,6 +149,7 @@ enum hw_db_idx_type {
HW_DB_IDX_TYPE_FLM_RCP,
HW_DB_IDX_TYPE_KM_RCP,
HW_DB_IDX_TYPE_FLM_FT,
+ HW_DB_IDX_TYPE_FLM_SCRUB,
HW_DB_IDX_TYPE_KM_FT,
HW_DB_IDX_TYPE_HSH,
};
@@ -160,6 +165,43 @@ struct hw_db_inline_match_set_data {
uint8_t priority;
};
+struct hw_db_inline_action_set_data {
+ int contains_jump;
+ union {
+ int jump;
+ struct {
+ struct hw_db_cot_idx cot;
+ struct hw_db_qsl_idx qsl;
+ struct hw_db_slc_lr_idx slc_lr;
+ struct hw_db_tpe_idx tpe;
+ struct hw_db_hsh_idx hsh;
+ struct hw_db_flm_scrub_idx scrub;
+ };
+ };
+};
+
+struct hw_db_inline_km_rcp_data {
+ uint32_t rcp;
+};
+
+struct hw_db_inline_km_ft_data {
+ struct hw_db_cat_idx cat;
+ struct hw_db_km_idx km;
+ struct hw_db_action_set_idx action_set;
+};
+
+struct hw_db_inline_flm_ft_data {
+ /* Group zero flows should set jump. */
+ /* Group nonzero flows should set group. */
+ int is_group_zero;
+ union {
+ int jump;
+ int group;
+ };
+
+ struct hw_db_action_set_idx action_set;
+};
+
/* Functionality data types */
struct hw_db_inline_cat_data {
uint32_t vlan_mask : 4;
@@ -232,39 +274,8 @@ struct hw_db_inline_hsh_data {
uint8_t key[MAX_RSS_KEY_LEN];
};
-struct hw_db_inline_action_set_data {
- int contains_jump;
- union {
- int jump;
- struct {
- struct hw_db_cot_idx cot;
- struct hw_db_qsl_idx qsl;
- struct hw_db_slc_lr_idx slc_lr;
- struct hw_db_tpe_idx tpe;
- struct hw_db_hsh_idx hsh;
- };
- };
-};
-
-struct hw_db_inline_km_rcp_data {
- uint32_t rcp;
-};
-
-struct hw_db_inline_km_ft_data {
- struct hw_db_cat_idx cat;
- struct hw_db_km_idx km;
- struct hw_db_action_set_idx action_set;
-};
-
-struct hw_db_inline_flm_ft_data {
- /* Group zero flows should set jump. */
- /* Group nonzero flows should set group. */
- int is_group_zero;
- union {
- int jump;
- int group;
- };
- struct hw_db_action_set_idx action_set;
+struct hw_db_inline_scrub_data {
+ uint32_t timeout;
};
/**/
@@ -368,6 +379,13 @@ void hw_db_inline_flm_ft_ref(struct flow_nic_dev *ndev, void *db_handle, struct
void hw_db_inline_flm_ft_deref(struct flow_nic_dev *ndev, void *db_handle,
struct hw_db_flm_ft idx);
+struct hw_db_flm_scrub_idx hw_db_inline_scrub_add(struct flow_nic_dev *ndev, void *db_handle,
+ const struct hw_db_inline_scrub_data *data);
+void hw_db_inline_scrub_ref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx);
+void hw_db_inline_scrub_deref(struct flow_nic_dev *ndev, void *db_handle,
+ struct hw_db_flm_scrub_idx idx);
+
int hw_db_inline_setup_mbr_filter(struct flow_nic_dev *ndev, uint32_t cat_hw_id, uint32_t ft,
uint32_t qsl_hw_id);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index a147dd9fd0..9f7b617e89 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -8,6 +8,7 @@
#include "hw_mod_backend.h"
#include "flm_age_queue.h"
+#include "flm_evt_queue.h"
#include "flm_lrn_queue.h"
#include "flow_api.h"
#include "flow_api_engine.h"
@@ -20,6 +21,13 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define DMA_BLOCK_SIZE 256
+#define DMA_OVERHEAD 20
+#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
+#define MAX_STA_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_STA_DATA)
+#define WORDS_PER_INF_DATA (sizeof(struct flm_v25_inf_data_s) / sizeof(uint32_t))
+#define MAX_INF_DATA_RECORDS_PER_READ ((DMA_BLOCK_SIZE - DMA_OVERHEAD) / WORDS_PER_INF_DATA)
+
#define NT_FLM_MISS_FLOW_TYPE 0
#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
@@ -71,14 +79,127 @@ static uint32_t flm_lrn_update(struct flow_eth_dev *dev, uint32_t *inf_word_cnt,
return r.num;
}
+static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
+{
+ if (caller_id < MAX_VDPA_PORTS + 1) {
+ *port = caller_id;
+ return true;
+ }
+
+ *port = caller_id - MAX_VDPA_PORTS - 1;
+ return false;
+}
+
+static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_inf_data_s *inf_data =
+ (struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, inf_data->id, &flm_h, &caller_id,
+ &type);
+
+ /* Check that the received record holds valid meter statistics */
+ if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
+
+ age_event.context = fh->context;
+
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
+ break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
+ }
+ }
+ }
+}
+
+static void flm_mtr_read_sta_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
+{
+ for (uint32_t i = 0; i < records; ++i) {
+ struct flm_v25_sta_data_s *sta_data =
+ (struct flm_v25_sta_data_s *)&data[i * WORDS_PER_STA_DATA];
+ uint8_t caller_id;
+ uint8_t type;
+ union flm_handles flm_h;
+ ntnic_id_table_find(dev->ndev->id_table_handle, sta_data->id, &flm_h, &caller_id,
+ &type);
+
+ if (type == 1) {
+ uint8_t port;
+ bool remote_caller = is_remote_caller(caller_id, &port);
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+ ((struct flow_handle *)flm_h.p)->learn_ignored = 1;
+ pthread_mutex_unlock(&dev->ndev->mtx);
+ struct flm_status_event_s sta_event = {
+ .flow = flm_h.p,
+ .learn_ignore = sta_data->lis,
+ .learn_failed = sta_data->lfs,
+ };
+
+ flm_sta_queue_put(port, remote_caller, &sta_event);
+ }
+ }
+}
+
static uint32_t flm_update(struct flow_eth_dev *dev)
{
static uint32_t inf_word_cnt;
static uint32_t sta_word_cnt;
+ uint32_t inf_data[DMA_BLOCK_SIZE];
+ uint32_t sta_data[DMA_BLOCK_SIZE];
+
+ if (inf_word_cnt >= WORDS_PER_INF_DATA || sta_word_cnt >= WORDS_PER_STA_DATA) {
+ uint32_t inf_records = inf_word_cnt / WORDS_PER_INF_DATA;
+
+ if (inf_records > MAX_INF_DATA_RECORDS_PER_READ)
+ inf_records = MAX_INF_DATA_RECORDS_PER_READ;
+
+ uint32_t sta_records = sta_word_cnt / WORDS_PER_STA_DATA;
+
+ if (sta_records > MAX_STA_DATA_RECORDS_PER_READ)
+ sta_records = MAX_STA_DATA_RECORDS_PER_READ;
+
+ hw_mod_flm_inf_sta_data_update_get(&dev->ndev->be, HW_FLM_FLOW_INF_STA_DATA,
+ inf_data, inf_records * WORDS_PER_INF_DATA,
+ &inf_word_cnt, sta_data,
+ sta_records * WORDS_PER_STA_DATA,
+ &sta_word_cnt);
+
+ if (inf_records > 0)
+ flm_mtr_read_inf_records(dev, inf_data, inf_records);
+
+ if (sta_records > 0)
+ flm_mtr_read_sta_records(dev, sta_data, sta_records);
+
+ return 1;
+ }
+
if (flm_lrn_update(dev, &inf_word_cnt, &sta_word_cnt) != 0)
return 1;
+ hw_mod_flm_buf_ctrl_update(&dev->ndev->be);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_INF_AVAIL, &inf_word_cnt);
+ hw_mod_flm_buf_ctrl_get(&dev->ndev->be, HW_FLM_BUF_CTRL_STA_AVAIL, &sta_word_cnt);
+
return inf_word_cnt + sta_word_cnt;
}
@@ -1067,6 +1188,25 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_AGE:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_AGE", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_age age_tmp;
+ const struct rte_flow_action_age *age =
+ memcpy_mask_if(&age_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_age));
+ fd->age.timeout = hw_mod_flm_scrub_timeout_encode(age->timeout);
+ fd->age.context = age->context;
+ NT_LOG(DBG, FILTER,
+ "normalized timeout: %u, original timeout: %u, context: %p",
+ hw_mod_flm_scrub_timeout_decode(fd->age.timeout),
+ age->timeout, fd->age.context);
+ }
+
+ break;
+
default:
NT_LOG(ERR, FILTER, "Invalid or unsupported flow action received - %i",
action[aidx].type);
@@ -2466,6 +2606,7 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
break;
}
}
+ fh->context = fd->age.context;
}
static int convert_fh_to_fh_flm(struct flow_handle *fh, const uint32_t *packet_data,
@@ -2722,6 +2863,21 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
return -1;
}
+ /* Setup SCRUB profile */
+ struct hw_db_inline_scrub_data scrub_data = { .timeout = fd->age.timeout };
+ struct hw_db_flm_scrub_idx scrub_idx =
+ hw_db_inline_scrub_add(dev->ndev, dev->ndev->hw_db_handle, &scrub_data);
+ local_idxs[(*local_idx_counter)++] = scrub_idx.raw;
+
+ if (scrub_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM SCRUB resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ return -1;
+ }
+
+ if (flm_scrub)
+ *flm_scrub = scrub_idx.ids;
+
/* Setup Action Set */
struct hw_db_inline_action_set_data action_set_data = {
.contains_jump = 0,
@@ -2730,6 +2886,7 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
.slc_lr = slc_lr_idx,
.tpe = tpe_idx,
.hsh = hsh_idx,
+ .scrub = scrub_idx,
};
struct hw_db_action_set_idx action_set_idx =
hw_db_inline_action_set_add(dev->ndev, dev->ndev->hw_db_handle, &action_set_data);
@@ -2796,6 +2953,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
goto error_out;
}
+ fh->context = fd->age.context;
nic_insert_flow(dev->ndev, fh);
} else if (attr->group > 0) {
@@ -2852,6 +3010,18 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
*/
int identical_km_entry_ft = -1;
+ /* Setup Action Set */
+
+ /* SCRUB/AGE action is not supported for group 0 */
+ if (fd->age.timeout != 0 || fd->age.context != NULL) {
+ NT_LOG(ERR, FILTER, "Action AGE is not supported for flow in group 0");
+ flow_nic_set_error(ERR_ACTION_AGE_UNSUPPORTED_GROUP_0, error);
+ goto error_out;
+ }
+
+ /* NOTE: SCRUB record 0 is used by default with timeout 0, i.e. flow will never
+ * AGE-out
+ */
struct hw_db_inline_action_set_data action_set_data = { 0 };
(void)action_set_data;
@@ -3344,6 +3514,15 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_mark_resource_used(ndev, RES_HSH_RCP, 0);
+ /* Initialize SCRUB with default index 0, i.e. flow will never AGE-out */
+ if (hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_PRESET_ALL, 0, 0) < 0)
+ goto err_exit0;
+
+ if (hw_mod_flm_scrub_flush(&ndev->be, 0, 1) < 0)
+ goto err_exit0;
+
+ flow_nic_mark_resource_used(ndev, RES_SCRUB_RCP, 0);
+
/* Setup filter using matching all packets violating traffic policing parameters */
flow_nic_mark_resource_used(ndev, RES_CAT_CFN, NT_VIOLATING_MBR_CFN);
flow_nic_mark_resource_used(ndev, RES_QSL_RCP, NT_VIOLATING_MBR_QSL);
@@ -3479,6 +3658,10 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
hw_mod_hsh_rcp_flush(&ndev->be, 0, 1);
flow_nic_free_resource(ndev, RES_HSH_RCP, 0);
+ hw_mod_flm_scrub_set(&ndev->be, HW_FLM_SCRUB_PRESET_ALL, 0, 0);
+ hw_mod_flm_scrub_flush(&ndev->be, 0, 1);
+ flow_nic_free_resource(ndev, RES_SCRUB_RCP, 0);
+
hw_db_inline_destroy(ndev->hw_db_handle);
#ifdef FLOW_DEBUG
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
index 8ba8b8f67a..3b53288ddf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -55,4 +55,23 @@
*/
#define NTNIC_SCANNER_LOAD 0.01
-#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
+/*
+ * This define sets the timeout resolution of aged flow scanner (scrubber).
+ *
+ * The timeout resolution feature is provided in order to reduce the number of
+ * write-back operations for flows without an attached meter. If the resolution
+ * is disabled (set to 0) and flow timeout is enabled via the age action, a write-back
+ * occurs every time the flow is evicted from the flow cache, essentially causing the
+ * lookup performance to drop to that of a flow with meter. By setting the timeout
+ * resolution (>0), write-back for flows happens only when the difference between
+ * the last recorded time for the flow and the current time exceeds the chosen resolution.
+ *
+ * The parameter value is a power of 2 in units of 2^28 nanoseconds. This means that value 8 sets
+ * the timeout resolution to: 2^8 * 2^28 / 1e9 = 68.7 seconds
+ *
+ * NOTE: This parameter has a significant impact on flow lookup performance, especially
+ * if full scanner timeout resolution (=0) is configured.
+ */
+#define NTNIC_SCANNER_TIMEOUT_RESOLUTION 8
+
+#endif /* _FLOW_API_PROFILE_INLINE_CONFIG_H_ */
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index a212b3ab07..e0f455dc1b 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -26,6 +26,7 @@
#include "ntnic_vfio.h"
#include "ntnic_mod_reg.h"
#include "nt_util.h"
+#include "profile_inline/flm_age_queue.h"
#include "profile_inline/flm_evt_queue.h"
#include "rte_pmd_ntnic.h"
@@ -1814,6 +1815,21 @@ THREAD_FUNC port_event_thread_fn(void *context)
}
}
+ /* AGED event */
+ /* Note: the RTE_FLOW_PORT_FLAG_STRICT_QUEUE flag is not supported, so
+ * the event is always generated
+ */
+ int aged_event_count = flm_age_event_get(port_no);
+
+ if (aged_event_count > 0 && eth_dev && eth_dev->data &&
+ eth_dev->data->dev_private) {
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_FLOW_AGED,
+ NULL);
+ flm_age_event_clear(port_no);
+ do_wait = false;
+ }
+
if (do_wait)
nt_os_wait_usec(10);
--
2.45.0
* [PATCH v5 68/80] net/ntnic: add termination thread
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (66 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 67/80] net/ntnic: add flow aging event Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 69/80] net/ntnic: add meter support Serhii Iliushyk
` (11 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Introduce clear_pdrv to unregister the driver from global tracking.
Modify drv_deinit to call clear_pdrv and ensure safe termination.
Add freeing of FLM status and age event queues.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
.../flow_api/profile_inline/flm_age_queue.c | 10 +++
.../flow_api/profile_inline/flm_age_queue.h | 1 +
.../flow_api/profile_inline/flm_evt_queue.c | 76 +++++++++++++++++++
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
drivers/net/ntnic/ntnic_ethdev.c | 6 ++
5 files changed, 94 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
index 76bbd57f65..d916eccec7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.c
@@ -44,6 +44,16 @@ void flm_age_queue_free(uint8_t port, uint16_t caller_id)
rte_ring_free(q);
}
+void flm_age_queue_free_all(void)
+{
+ int i;
+ int j;
+
+ for (i = 0; i < MAX_EVT_AGE_PORTS; i++)
+ for (j = 0; j < MAX_EVT_AGE_QUEUES; j++)
+ flm_age_queue_free(i, j);
+}
+
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count)
{
char name[20];
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
index 27154836c5..55c410ac86 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_age_queue.h
@@ -32,6 +32,7 @@ int flm_age_event_get(uint8_t port);
void flm_age_event_set(uint8_t port);
void flm_age_event_clear(uint8_t port);
void flm_age_queue_free(uint8_t port, uint16_t caller_id);
+void flm_age_queue_free_all(void);
struct rte_ring *flm_age_queue_create(uint8_t port, uint16_t caller_id, unsigned int count);
void flm_age_queue_put(uint16_t caller_id, struct flm_age_event_s *obj);
int flm_age_queue_get(uint16_t caller_id, struct flm_age_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index db9687714f..761609a0ea 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -25,6 +25,82 @@ static struct rte_ring *stat_q_local[MAX_STAT_LCL_QUEUES];
/* Remote queues for flm status records */
static struct rte_ring *stat_q_remote[MAX_STAT_RMT_QUEUES];
+static void flm_inf_sta_queue_free(uint8_t port, uint8_t caller)
+{
+ struct rte_ring *q = NULL;
+
+ /* If the queue was not created, ignore and return */
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ q = info_q_local[port];
+ info_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_INFO_REMOTE:
+ if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ q = info_q_remote[port];
+ info_q_remote[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_LOCAL:
+ if (port < MAX_STAT_LCL_QUEUES && stat_q_local[port] != NULL) {
+ q = stat_q_local[port];
+ stat_q_local[port] = NULL;
+ }
+
+ break;
+
+ case FLM_STAT_REMOTE:
+ if (port < MAX_STAT_RMT_QUEUES && stat_q_remote[port] != NULL) {
+ q = stat_q_remote[port];
+ stat_q_remote[port] = NULL;
+ }
+
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ break;
+ }
+
+ if (q)
+ rte_ring_free(q);
+}
+
+void flm_inf_sta_queue_free_all(uint8_t caller)
+{
+ int count = 0;
+
+ switch (caller) {
+ case FLM_INFO_LOCAL:
+ count = MAX_INFO_LCL_QUEUES;
+ break;
+
+ case FLM_INFO_REMOTE:
+ count = MAX_INFO_RMT_QUEUES;
+ break;
+
+ case FLM_STAT_LOCAL:
+ count = MAX_STAT_LCL_QUEUES;
+ break;
+
+ case FLM_STAT_REMOTE:
+ count = MAX_STAT_RMT_QUEUES;
+ break;
+
+ default:
+ NT_LOG(ERR, FILTER, "FLM queue free illegal caller: %u", caller);
+ return;
+ }
+
+ for (int i = 0; i < count; i++)
+ flm_inf_sta_queue_free(i, caller);
+}
static struct rte_ring *flm_evt_queue_create(uint8_t port, uint8_t caller)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index 3a61f844b6..d61b282472 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -47,6 +47,7 @@ enum {
#define FLM_EVT_ELEM_SIZE sizeof(struct flm_info_event_s)
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
+void flm_inf_sta_queue_free_all(uint8_t caller);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index e0f455dc1b..cdf5c346b7 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1420,6 +1420,12 @@ drv_deinit(struct drv_s *p_drv)
THREAD_JOIN(p_nt_drv->flm_thread);
profile_inline_ops->flm_free_queues();
THREAD_JOIN(p_nt_drv->port_event_thread);
+ /* Free all local flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_LOCAL);
+ /* Free all remote flm event queues */
+ flm_inf_sta_queue_free_all(FLM_INFO_REMOTE);
+ /* Free all aged flow event queues */
+ flm_age_queue_free_all();
}
/* stop adapter */
--
2.45.0
* [PATCH v5 69/80] net/ntnic: add meter support
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (67 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 68/80] net/ntnic: add termination thread Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 70/80] net/ntnic: add meter module Serhii Iliushyk
` (10 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add meter implementation to the profile inline.
Management functions were extended with meter flow support.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
drivers/net/ntnic/include/flow_api.h | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 5 +
.../flow_api/profile_inline/flm_evt_queue.c | 21 +
.../flow_api/profile_inline/flm_evt_queue.h | 1 +
.../profile_inline/flow_api_profile_inline.c | 560 +++++++++++++++++-
drivers/net/ntnic/ntnic_mod_reg.h | 27 +
9 files changed, 600 insertions(+), 18 deletions(-)
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index af2981ccf6..e2de6d15f6 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -37,6 +37,7 @@ age = Y
drop = Y
jump = Y
mark = Y
+meter = Y
modify_field = Y
port_id = Y
queue = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index 806732e790..bf5743f196 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -68,6 +68,7 @@ Features
- Link state information.
- Flow statistics
- Flow aging support
+- Flow metering, including meter policy API.
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 8090c925fd..76d5efc97c 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -165,6 +165,7 @@ New Features
* Enable virtual queues
* Added statistics support
* Added age rte flow action support
+ * Added meter flow metering and flow policy support
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 89f071d982..032063712a 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -100,6 +100,7 @@ struct flow_nic_dev {
void *km_res_handle;
void *kcc_res_handle;
+ void *flm_mtr_handle;
void *group_handle;
void *hw_db_handle;
void *id_table_handle;
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index c75e7cff83..b40a27fbf1 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -57,6 +57,7 @@ enum res_type_e {
#define MAX_TCAM_START_OFFSETS 4
+#define MAX_FLM_MTRS_SUPPORTED 4
#define MAX_CPY_WRITERS_SUPPORTED 8
#define MAX_MATCH_FIELDS 16
@@ -223,6 +224,8 @@ struct nic_flow_def {
uint32_t jump_to_group;
+ uint32_t mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
int full_offload;
/*
@@ -320,6 +323,8 @@ struct flow_handle {
uint32_t flm_db_idx_counter;
uint32_t flm_db_idxs[RES_COUNT];
+ uint32_t flm_mtr_ids[MAX_FLM_MTRS_SUPPORTED];
+
uint32_t flm_data[10];
uint8_t flm_prot;
uint8_t flm_kid;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
index 761609a0ea..d76c7da568 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.c
@@ -234,6 +234,27 @@ int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj)
return 0;
}
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj)
+{
+ int ret;
+
+ /* If the queue was not created, ignore and return */
+ if (!remote) {
+ if (port < MAX_INFO_LCL_QUEUES && info_q_local[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_local[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM local info queue full");
+ }
+
+ } else if (port < MAX_INFO_RMT_QUEUES && info_q_remote[port] != NULL) {
+ ret = rte_ring_sp_enqueue_elem(info_q_remote[port], obj, FLM_EVT_ELEM_SIZE);
+
+ if (ret != 0)
+ NT_LOG(DBG, FILTER, "FLM remote info queue full");
+ }
+}
+
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj)
{
int ret;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
index d61b282472..ee8175cf25 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flm_evt_queue.h
@@ -48,6 +48,7 @@ enum {
#define FLM_STAT_ELEM_SIZE sizeof(struct flm_status_event_s)
void flm_inf_sta_queue_free_all(uint8_t caller);
+void flm_inf_queue_put(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_inf_queue_get(uint8_t port, bool remote, struct flm_info_event_s *obj);
int flm_sta_queue_put(uint8_t port, bool remote, struct flm_status_event_s *obj);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9f7b617e89..189bdf01d6 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -21,6 +21,10 @@
#include "ntnic_mod_reg.h"
#include <rte_common.h>
+#define FLM_MTR_PROFILE_SIZE 0x100000
+#define FLM_MTR_STAT_SIZE 0x1000000
+#define UINT64_MSB ((uint64_t)1 << 63)
+
#define DMA_BLOCK_SIZE 256
#define DMA_OVERHEAD 20
#define WORDS_PER_STA_DATA (sizeof(struct flm_v25_sta_data_s) / sizeof(uint32_t))
@@ -46,8 +50,336 @@
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_MISS_FLOW_TYPE 0
+#define NT_FLM_UNHANDLED_FLOW_TYPE 1
+#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
+
+#define NT_VIOLATING_MBR_CFN 0
+#define NT_VIOLATING_MBR_QSL 1
+
+#define POLICING_PARAMETER_OFFSET 4096
+#define SIZE_CONVERTER 1099.511627776
+
+struct flm_mtr_stat_s {
+ struct dual_buckets_s *buckets;
+ atomic_uint_fast64_t n_pkt;
+ atomic_uint_fast64_t n_bytes;
+ uint64_t n_pkt_base;
+ uint64_t n_bytes_base;
+ atomic_uint_fast64_t stats_mask;
+ uint32_t flm_id;
+};
+
+struct flm_mtr_shared_stats_s {
+ struct flm_mtr_stat_s *stats;
+ uint32_t size;
+ int shared;
+};
+
+struct flm_flow_mtr_handle_s {
+ struct dual_buckets_s {
+ uint16_t rate_a;
+ uint16_t rate_b;
+ uint16_t size_a;
+ uint16_t size_b;
+ } dual_buckets[FLM_MTR_PROFILE_SIZE];
+
+ struct flm_mtr_shared_stats_s *port_stats[UINT8_MAX];
+};
+
static void *flm_lrn_queue_arr;
+static int flow_mtr_supported(struct flow_eth_dev *dev)
+{
+ return hw_mod_flm_present(&dev->ndev->be) && dev->ndev->be.flm.nb_variant == 2;
+}
+
+static uint64_t flow_mtr_meter_policy_n_max(void)
+{
+ return FLM_MTR_PROFILE_SIZE;
+}
+
+static inline uint64_t convert_policing_parameter(uint64_t value)
+{
+ uint64_t limit = POLICING_PARAMETER_OFFSET;
+ uint64_t shift = 0;
+ uint64_t res = value;
+
+ while (shift < 15 && value >= limit) {
+ limit <<= 1;
+ ++shift;
+ }
+
+ if (shift != 0) {
+ uint64_t tmp = POLICING_PARAMETER_OFFSET * (1 << (shift - 1));
+
+ if (tmp > value) {
+ res = 0;
+
+ } else {
+ tmp = value - tmp;
+ res = tmp >> (shift - 1);
+ }
+
+ if (res >= POLICING_PARAMETER_OFFSET)
+ res = POLICING_PARAMETER_OFFSET - 1;
+
+ res = res | (shift << 12);
+ }
+
+ return res;
+}
+
+static int flow_mtr_set_profile(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a, uint64_t bucket_rate_b,
+ uint64_t bucket_size_b)
+{
+ struct flow_nic_dev *ndev = dev->ndev;
+ struct flm_flow_mtr_handle_s *handle =
+ (struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle;
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ /* Round rates up to nearest 128 bytes/sec and shift to 128 bytes/sec units */
+ bucket_rate_a = (bucket_rate_a + 127) >> 7;
+ bucket_rate_b = (bucket_rate_b + 127) >> 7;
+
+ buckets->rate_a = convert_policing_parameter(bucket_rate_a);
+ buckets->rate_b = convert_policing_parameter(bucket_rate_b);
+
+ /* Round size down to 38-bit int */
+ if (bucket_size_a > 0x3fffffffff)
+ bucket_size_a = 0x3fffffffff;
+
+ if (bucket_size_b > 0x3fffffffff)
+ bucket_size_b = 0x3fffffffff;
+
+ /* Convert size to units of 2^40 / 10^9. Output is a 28-bit int. */
+ bucket_size_a = bucket_size_a / SIZE_CONVERTER;
+ bucket_size_b = bucket_size_b / SIZE_CONVERTER;
+
+ buckets->size_a = convert_policing_parameter(bucket_size_a);
+ buckets->size_b = convert_policing_parameter(bucket_size_b);
+
+ return 0;
+}
+
+static int flow_mtr_set_policy(struct flow_eth_dev *dev, uint32_t policy_id, int drop)
+{
+ (void)dev;
+ (void)policy_id;
+ (void)drop;
+ return 0;
+}
+
+static uint32_t flow_mtr_meters_supported(struct flow_eth_dev *dev, uint8_t caller_id)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ return handle->port_stats[caller_id]->size;
+}
+
+static int flow_mtr_create_meter(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t mtr_id,
+ uint32_t profile_id,
+ uint32_t policy_id,
+ uint64_t stats_mask)
+{
+ (void)policy_id;
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct dual_buckets_s *buckets = &handle->dual_buckets[profile_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ union flm_handles flm_h;
+ flm_h.idx = mtr_id;
+ uint32_t flm_id = ntnic_id_table_get_id(dev->ndev->id_table_handle, flm_h, caller_id, 2);
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = buckets->rate_a;
+ learn_record->size = buckets->size_a;
+ learn_record->fill = buckets->size_a;
+
+ learn_record->ft_mbr =
+ NT_FLM_VIOLATING_MBR_FLOW_TYPE; /* FT to assign if MBR has been exceeded */
+
+ learn_record->ent = 1;
+ learn_record->op = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ if (stats_mask)
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ mtr_stat[mtr_id].buckets = buckets;
+ mtr_stat[mtr_id].flm_id = flm_id;
+ atomic_store(&mtr_stat[mtr_id].stats_mask, stats_mask);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 3;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ uint32_t flm_id = mtr_stat[mtr_id].flm_id;
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = flm_id;
+ learn_record->kid = 1;
+
+ learn_record->ent = 1;
+ learn_record->op = 0;
+ /* Suppress generation of statistics INF_DATA */
+ learn_record->nofi = 1;
+ learn_record->eor = 1;
+
+ learn_record->id = flm_id;
+
+ /* Clear statistics so stats_mask prevents updates of counters on deleted meters */
+ atomic_store(&mtr_stat[mtr_id].stats_mask, 0);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, 0);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, 0);
+ mtr_stat[mtr_id].n_bytes_base = 0;
+ mtr_stat[mtr_id].n_pkt_base = 0;
+ mtr_stat[mtr_id].buckets = NULL;
+
+ ntnic_id_table_free_id(dev->ndev->id_table_handle, flm_id);
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
+static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value)
+{
+ struct flm_v25_lrn_data_s *learn_record = NULL;
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+
+ while (learn_record == NULL) {
+ nt_os_wait_usec(1);
+ learn_record =
+ (struct flm_v25_lrn_data_s *)
+ flm_lrn_queue_get_write_buffer(flm_lrn_queue_arr);
+ }
+
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
+ struct flm_mtr_stat_s *mtr_stat = &handle->port_stats[caller_id]->stats[mtr_id];
+
+ memset(learn_record, 0x0, sizeof(struct flm_v25_lrn_data_s));
+
+ learn_record->sw9 = mtr_stat->flm_id;
+ learn_record->kid = 1;
+
+ learn_record->rate = mtr_stat->buckets->rate_a;
+ learn_record->size = mtr_stat->buckets->size_a;
+ learn_record->adj = adjust_value;
+
+ learn_record->ft_mbr = NT_FLM_VIOLATING_MBR_FLOW_TYPE;
+
+ learn_record->ent = 1;
+ learn_record->op = 2;
+ learn_record->eor = 1;
+
+ if (atomic_load(&mtr_stat->stats_mask))
+ learn_record->vol_idx = 1;
+
+ flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ return 0;
+}
+
static void flm_setup_queues(void)
{
flm_lrn_queue_arr = flm_lrn_queue_create();
@@ -92,6 +424,8 @@ static inline bool is_remote_caller(uint8_t caller_id, uint8_t *port)
static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, uint32_t records)
{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+
for (uint32_t i = 0; i < records; ++i) {
struct flm_v25_inf_data_s *inf_data =
(struct flm_v25_inf_data_s *)&data[i * WORDS_PER_INF_DATA];
@@ -102,29 +436,62 @@ static void flm_mtr_read_inf_records(struct flow_eth_dev *dev, uint32_t *data, u
&type);
/* Check that received record holds valid meter statistics */
- if (type == 1) {
- switch (inf_data->cause) {
- case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
- case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
- struct flow_handle *fh = (struct flow_handle *)flm_h.p;
- struct flm_age_event_s age_event;
- uint8_t port;
+ if (type == 2) {
+ uint64_t mtr_id = flm_h.idx;
+
+ if (mtr_id < handle->port_stats[caller_id]->size) {
+ struct flm_mtr_stat_s *mtr_stat =
+ handle->port_stats[caller_id]->stats;
+
+ /* Don't update a deleted meter */
+ uint64_t stats_mask = atomic_load(&mtr_stat[mtr_id].stats_mask);
+
+ if (stats_mask) {
+ atomic_store(&mtr_stat[mtr_id].n_pkt,
+ inf_data->packets | UINT64_MSB);
+ atomic_store(&mtr_stat[mtr_id].n_bytes, inf_data->bytes);
+ atomic_store(&mtr_stat[mtr_id].n_pkt, inf_data->packets);
+ struct flm_info_event_s stat_data;
+ bool remote_caller;
+ uint8_t port;
+
+ remote_caller = is_remote_caller(caller_id, &port);
+
+ /* Save stat data to flm stat queue */
+ stat_data.bytes = inf_data->bytes;
+ stat_data.packets = inf_data->packets;
+ stat_data.id = mtr_id;
+ stat_data.timestamp = inf_data->ts;
+ stat_data.cause = inf_data->cause;
+ flm_inf_queue_put(port, remote_caller, &stat_data);
+ }
+ }
- age_event.context = fh->context;
+ /* Check that received record holds valid flow data */
- is_remote_caller(caller_id, &port);
+ } else if (type == 1) {
+ switch (inf_data->cause) {
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_DELETED:
+ case INF_DATA_CAUSE_TIMEOUT_FLOW_KEPT: {
+ struct flow_handle *fh = (struct flow_handle *)flm_h.p;
+ struct flm_age_event_s age_event;
+ uint8_t port;
- flm_age_queue_put(caller_id, &age_event);
- flm_age_event_set(port);
- }
- break;
+ age_event.context = fh->context;
- case INF_DATA_CAUSE_SW_UNLEARN:
- case INF_DATA_CAUSE_NA:
- case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
- case INF_DATA_CAUSE_SW_PROBE:
- default:
+ is_remote_caller(caller_id, &port);
+
+ flm_age_queue_put(caller_id, &age_event);
+ flm_age_event_set(port);
+ }
break;
+
+ case INF_DATA_CAUSE_SW_UNLEARN:
+ case INF_DATA_CAUSE_NA:
+ case INF_DATA_CAUSE_PERIODIC_FLOW_INFO:
+ case INF_DATA_CAUSE_SW_PROBE:
+ default:
+ break;
}
}
}
@@ -203,6 +570,42 @@ static uint32_t flm_update(struct flow_eth_dev *dev)
return inf_word_cnt + sta_word_cnt;
}
+static void flm_mtr_read_stats(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear)
+{
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[caller_id]->stats;
+ *stats_mask = atomic_load(&mtr_stat[id].stats_mask);
+
+ if (*stats_mask) {
+ uint64_t pkt_1;
+ uint64_t pkt_2;
+ uint64_t nb;
+
+ do {
+ do {
+ pkt_1 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 & UINT64_MSB);
+
+ nb = atomic_load(&mtr_stat[id].n_bytes);
+ pkt_2 = atomic_load(&mtr_stat[id].n_pkt);
+ } while (pkt_1 != pkt_2);
+
+ *green_pkt = pkt_1 - mtr_stat[id].n_pkt_base;
+ *green_bytes = nb - mtr_stat[id].n_bytes_base;
+
+ if (clear) {
+ mtr_stat[id].n_pkt_base = pkt_1;
+ mtr_stat[id].n_bytes_base = nb;
+ }
+ }
+}
+
static int rx_queue_idx_to_hw_id(const struct flow_eth_dev *dev, int id)
{
for (int i = 0; i < dev->num_queues; ++i)
@@ -492,6 +895,8 @@ static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
fd->mark = UINT32_MAX;
fd->jump_to_group = UINT32_MAX;
+ memset(fd->mtr_ids, 0xff, sizeof(uint32_t) * MAX_FLM_MTRS_SUPPORTED);
+
fd->l2_prot = -1;
fd->l3_prot = -1;
fd->l4_prot = -1;
@@ -587,9 +992,17 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->sw9 = fh->flm_data[0];
learn_record->prot = fh->flm_prot;
+ learn_record->mbr_idx1 = fh->flm_mtr_ids[0];
+ learn_record->mbr_idx2 = fh->flm_mtr_ids[1];
+ learn_record->mbr_idx3 = fh->flm_mtr_ids[2];
+ learn_record->mbr_idx4 = fh->flm_mtr_ids[3];
+
/* Last non-zero mtr is used for statistics */
uint8_t mbrs = 0;
+ while (mbrs < MAX_FLM_MTRS_SUPPORTED && fh->flm_mtr_ids[mbrs] != 0)
+ ++mbrs;
+
learn_record->vol_idx = mbrs;
learn_record->nat_ip = fh->flm_nat_ipv4;
@@ -628,6 +1041,8 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
uint32_t *num_dest_port,
uint32_t *num_queues)
{
+ int mtr_count = 0;
+
unsigned int encap_decap_order = 0;
uint64_t modify_field_use_flags = 0x0;
@@ -813,6 +1228,29 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
break;
+ case RTE_FLOW_ACTION_TYPE_METER:
+ NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_METER", dev);
+
+ if (action[aidx].conf) {
+ struct rte_flow_action_meter meter_tmp;
+ const struct rte_flow_action_meter *meter =
+ memcpy_mask_if(&meter_tmp, action[aidx].conf,
+ action_mask ? action_mask[aidx].conf : NULL,
+ sizeof(struct rte_flow_action_meter));
+
+ if (mtr_count >= MAX_FLM_MTRS_SUPPORTED) {
+ NT_LOG(ERR, FILTER,
+ "ERROR: - Number of METER actions exceeds %d.",
+ MAX_FLM_MTRS_SUPPORTED);
+ flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+ return -1;
+ }
+
+ fd->mtr_ids[mtr_count++] = meter->mtr_id;
+ }
+
+ break;
+
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
@@ -2529,6 +2967,13 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
const uint32_t *packet_data, uint32_t flm_key_id, uint32_t flm_ft,
uint16_t rpl_ext_ptr, uint32_t flm_scrub __rte_unused, uint32_t priority)
{
+ for (int i = 0; i < MAX_FLM_MTRS_SUPPORTED; ++i) {
+ struct flm_flow_mtr_handle_s *handle = fh->dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat = handle->port_stats[fh->caller_id]->stats;
+ fh->flm_mtr_ids[i] =
+ fd->mtr_ids[i] == UINT32_MAX ? 0 : mtr_stat[fd->mtr_ids[i]].flm_id;
+ }
+
switch (fd->l4_prot) {
case PROT_L4_TCP:
fh->flm_prot = 6;
@@ -3594,6 +4039,29 @@ int initialize_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
if (ndev->id_table_handle == NULL)
goto err_exit0;
+ ndev->flm_mtr_handle = calloc(1, sizeof(struct flm_flow_mtr_handle_s));
+ struct flm_mtr_shared_stats_s *flm_shared_stats =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *flm_stats =
+ calloc(FLM_MTR_STAT_SIZE, sizeof(struct flm_mtr_stat_s));
+
+ if (ndev->flm_mtr_handle == NULL || flm_shared_stats == NULL ||
+ flm_stats == NULL) {
+ free(ndev->flm_mtr_handle);
+ free(flm_shared_stats);
+ free(flm_stats);
+ goto err_exit0;
+ }
+
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ ((struct flm_flow_mtr_handle_s *)ndev->flm_mtr_handle)->port_stats[i] =
+ flm_shared_stats;
+ }
+
+ flm_shared_stats->stats = flm_stats;
+ flm_shared_stats->size = FLM_MTR_STAT_SIZE;
+ flm_shared_stats->shared = UINT8_MAX;
+
if (flow_group_handle_create(&ndev->group_handle, ndev->be.flm.nb_categories))
goto err_exit0;
@@ -3628,6 +4096,18 @@ int done_flow_management_of_ndev_profile_inline(struct flow_nic_dev *ndev)
flow_nic_free_resource(ndev, RES_FLM_FLOW_TYPE, 1);
flow_nic_free_resource(ndev, RES_FLM_RCP, 0);
+ for (uint32_t i = 0; i < UINT8_MAX; ++i) {
+ struct flm_flow_mtr_handle_s *handle = ndev->flm_mtr_handle;
+ handle->port_stats[i]->shared -= 1;
+
+ if (handle->port_stats[i]->shared == 0) {
+ free(handle->port_stats[i]->stats);
+ free(handle->port_stats[i]);
+ }
+ }
+
+ free(ndev->flm_mtr_handle);
+
flow_group_handle_destroy(&ndev->group_handle);
ntnic_id_table_destroy(ndev->id_table_handle);
@@ -4751,6 +5231,11 @@ int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
port_info->max_nb_aging_objects = dev->nb_aging_objects;
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle)
+ port_info->max_nb_meters = mtr_handle->port_stats[caller_id]->size;
+
return res;
}
@@ -4782,6 +5267,35 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
dev->nb_aging_objects = port_attr->nb_aging_objects;
}
+ if (port_attr->nb_meters > 0) {
+ struct flm_flow_mtr_handle_s *mtr_handle = dev->ndev->flm_mtr_handle;
+
+ if (mtr_handle->port_stats[caller_id]->shared == 1) {
+ res = realloc(mtr_handle->port_stats[caller_id]->stats,
+ port_attr->nb_meters) == NULL
+ ? -1
+ : 0;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+
+ } else {
+ mtr_handle->port_stats[caller_id] =
+ calloc(1, sizeof(struct flm_mtr_shared_stats_s));
+ struct flm_mtr_stat_s *stats =
+ calloc(port_attr->nb_meters, sizeof(struct flm_mtr_stat_s));
+
+ if (mtr_handle->port_stats[caller_id] == NULL || stats == NULL) {
+ free(mtr_handle->port_stats[caller_id]);
+ free(stats);
+ error->message = "Failed to allocate meter actions";
+ goto error_out;
+ }
+
+ mtr_handle->port_stats[caller_id]->stats = stats;
+ mtr_handle->port_stats[caller_id]->size = port_attr->nb_meters;
+ mtr_handle->port_stats[caller_id]->shared = 1;
+ }
+ }
+
return res;
error_out:
@@ -4821,8 +5335,18 @@ static const struct profile_inline_ops ops = {
/*
* NT Flow FLM Meter API
*/
+ .flow_mtr_supported = flow_mtr_supported,
+ .flow_mtr_meter_policy_n_max = flow_mtr_meter_policy_n_max,
+ .flow_mtr_set_profile = flow_mtr_set_profile,
+ .flow_mtr_set_policy = flow_mtr_set_policy,
+ .flow_mtr_create_meter = flow_mtr_create_meter,
+ .flow_mtr_probe_meter = flow_mtr_probe_meter,
+ .flow_mtr_destroy_meter = flow_mtr_destroy_meter,
+ .flm_mtr_adjust_stats = flm_mtr_adjust_stats,
+ .flow_mtr_meters_supported = flow_mtr_meters_supported,
.flm_setup_queues = flm_setup_queues,
.flm_free_queues = flm_free_queues,
+ .flm_mtr_read_stats = flm_mtr_read_stats,
.flm_update = flm_update,
};
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 15da911ca7..1e9dcd549f 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -308,6 +308,33 @@ struct profile_inline_ops {
*/
void (*flm_setup_queues)(void);
void (*flm_free_queues)(void);
+
+ /*
+ * NT Flow FLM Meter API
+ */
+ int (*flow_mtr_supported)(struct flow_eth_dev *dev);
+ uint64_t (*flow_mtr_meter_policy_n_max)(void);
+ int (*flow_mtr_set_profile)(struct flow_eth_dev *dev, uint32_t profile_id,
+ uint64_t bucket_rate_a, uint64_t bucket_size_a,
+ uint64_t bucket_rate_b, uint64_t bucket_size_b);
+ int (*flow_mtr_set_policy)(struct flow_eth_dev *dev, uint32_t policy_id, int drop);
+ int (*flow_mtr_create_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t profile_id, uint32_t policy_id, uint64_t stats_mask);
+ int (*flow_mtr_probe_meter)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id);
+ int (*flow_mtr_destroy_meter)(struct flow_eth_dev *dev, uint8_t caller_id,
+ uint32_t mtr_id);
+ int (*flm_mtr_adjust_stats)(struct flow_eth_dev *dev, uint8_t caller_id, uint32_t mtr_id,
+ uint32_t adjust_value);
+ uint32_t (*flow_mtr_meters_supported)(struct flow_eth_dev *dev, uint8_t caller_id);
+
+ void (*flm_mtr_read_stats)(struct flow_eth_dev *dev,
+ uint8_t caller_id,
+ uint32_t id,
+ uint64_t *stats_mask,
+ uint64_t *green_pkt,
+ uint64_t *green_bytes,
+ int clear);
+
uint32_t (*flm_update)(struct flow_eth_dev *dev);
int (*flow_info_get_profile_inline)(struct flow_eth_dev *dev, uint8_t caller_id,
--
2.45.0
* [PATCH v5 70/80] net/ntnic: add meter module
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (68 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 69/80] net/ntnic: add meter support Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 71/80] net/ntnic: add action update support Serhii Iliushyk
` (9 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The meter module was added, providing:
1. add/remove profile
2. create/destroy meter
3. add/remove meter policy
4. read/update stats
The eth_dev_ops struct was extended with the ops listed above.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/features/ntnic.ini | 1 +
drivers/net/ntnic/include/ntos_drv.h | 14 +
drivers/net/ntnic/meson.build | 2 +
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 483 ++++++++++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 11 +-
drivers/net/ntnic/ntnic_mod_reg.c | 21 +
drivers/net/ntnic/ntnic_mod_reg.h | 12 +
7 files changed, 543 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index e2de6d15f6..884365f1a0 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -39,6 +39,7 @@ jump = Y
mark = Y
meter = Y
modify_field = Y
+passthru = Y
port_id = Y
queue = Y
raw_decap = Y
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 7b3c8ff3d6..f6ce442d17 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -12,6 +12,7 @@
#include <inttypes.h>
#include <rte_ether.h>
+#include "rte_mtr.h"
#include "stream_binary_flow_api.h"
#include "nthw_drv.h"
@@ -90,6 +91,19 @@ struct __rte_cache_aligned ntnic_tx_queue {
enum fpga_info_profile profile; /* Inline / Capture */
};
+struct nt_mtr_profile {
+ LIST_ENTRY(nt_mtr_profile) next;
+ uint32_t profile_id;
+ struct rte_mtr_meter_profile profile;
+};
+
+struct nt_mtr {
+ LIST_ENTRY(nt_mtr) next;
+ uint32_t mtr_id;
+ int shared;
+ struct nt_mtr_profile *profile;
+};
+
struct pmd_internals {
const struct rte_pci_device *pci_dev;
struct flow_eth_dev *flw_dev;
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index 8c6d02a5ec..ca46541ef3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -17,6 +17,7 @@ includes = [
include_directories('nthw'),
include_directories('nthw/supported'),
include_directories('nthw/model'),
+ include_directories('nthw/ntnic_meter'),
include_directories('nthw/flow_filter'),
include_directories('nthw/flow_api'),
include_directories('nim/'),
@@ -92,6 +93,7 @@ sources = files(
'nthw/flow_filter/flow_nthw_tx_cpy.c',
'nthw/flow_filter/flow_nthw_tx_ins.c',
'nthw/flow_filter/flow_nthw_tx_rpl.c',
+ 'nthw/ntnic_meter/ntnic_meter.c',
'nthw/model/nthw_fpga_model.c',
'nthw/nthw_platform.c',
'nthw/nthw_rac.c',
diff --git a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
new file mode 100644
index 0000000000..e4e8fe0c7d
--- /dev/null
+++ b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
@@ -0,0 +1,483 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Napatech A/S
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_meter.h>
+#include <rte_mtr.h>
+#include <rte_mtr_driver.h>
+#include <rte_malloc.h>
+
+#include "ntos_drv.h"
+#include "ntlog.h"
+#include "nt_util.h"
+#include "ntos_system.h"
+#include "ntnic_mod_reg.h"
+
+static inline uint8_t get_caller_id(uint16_t port)
+{
+ return MAX_VDPA_PORTS + (uint8_t)(port & 0x7f) + 1;
+}
+
+struct qos_integer_fractional {
+ uint32_t integer;
+ uint32_t fractional; /* 1/1024 */
+};
+
+/*
+ * Inline FLM metering
+ */
+
+static int eth_mtr_capabilities_get_inline(struct rte_eth_dev *eth_dev,
+ struct rte_mtr_capabilities *cap,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (!profile_inline_ops->flow_mtr_supported(internals->flw_dev)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Ethernet device does not support metering");
+ }
+
+ memset(cap, 0x0, sizeof(struct rte_mtr_capabilities));
+
+ /* MBR records use 28-bit integers */
+ cap->n_max = profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id);
+ cap->n_shared_max = cap->n_max;
+
+ cap->identical = 0;
+ cap->shared_identical = 0;
+
+ cap->shared_n_flows_per_mtr_max = UINT32_MAX;
+
+ /* Limited by number of MBR record ids per FLM learn record */
+ cap->chaining_n_mtrs_per_flow_max = 4;
+
+ cap->chaining_use_prev_mtr_color_supported = 0;
+ cap->chaining_use_prev_mtr_color_enforced = 0;
+
+ cap->meter_rate_max = (uint64_t)(0xfff << 0xf) * 1099;
+
+ cap->stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ /* Only color-blind mode is supported */
+ cap->color_aware_srtcm_rfc2697_supported = 0;
+ cap->color_aware_trtcm_rfc2698_supported = 0;
+ cap->color_aware_trtcm_rfc4115_supported = 0;
+
+ /* Focused on RFC2698 for now */
+ cap->meter_srtcm_rfc2697_n_max = 0;
+ cap->meter_trtcm_rfc2698_n_max = cap->n_max;
+ cap->meter_trtcm_rfc4115_n_max = 0;
+
+ cap->meter_policy_n_max = profile_inline_ops->flow_mtr_meter_policy_n_max();
+
+ /* Byte mode is supported */
+ cap->srtcm_rfc2697_byte_mode_supported = 0;
+ cap->trtcm_rfc2698_byte_mode_supported = 1;
+ cap->trtcm_rfc4115_byte_mode_supported = 0;
+
+ /* Packet mode not supported */
+ cap->srtcm_rfc2697_packet_mode_supported = 0;
+ cap->trtcm_rfc2698_packet_mode_supported = 0;
+ cap->trtcm_rfc4115_packet_mode_supported = 0;
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (profile->packet_mode != 0) {
+ return -rte_mtr_error_set(error, EINVAL,
+ RTE_MTR_ERROR_TYPE_METER_PROFILE_PACKET_MODE, NULL,
+ "Profile packet mode not supported");
+ }
+
+ if (profile->alg == RTE_MTR_SRTCM_RFC2697) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 2697 not supported");
+ }
+
+ if (profile->alg == RTE_MTR_TRTCM_RFC4115) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "RFC 4115 not supported");
+ }
+
+ if (profile->trtcm_rfc2698.cir != profile->trtcm_rfc2698.pir ||
+ profile->trtcm_rfc2698.cbs != profile->trtcm_rfc2698.pbs) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile committed and peak rates must be equal");
+ }
+
+ int res = profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id,
+ profile->trtcm_rfc2698.cir,
+ profile->trtcm_rfc2698.cbs, 0, 0);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL,
+ "Profile could not be added.");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_profile_delete_inline(struct rte_eth_dev *eth_dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ profile_inline_ops->flow_mtr_set_profile(internals->flw_dev, meter_profile_id, 0, 0, 0, 0);
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_add_inline(struct rte_eth_dev *eth_dev,
+ uint32_t policy_id,
+ struct rte_mtr_meter_policy_params *policy,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ const struct rte_flow_action *actions = policy->actions[RTE_COLOR_GREEN];
+ int green_action_supported = (actions[0].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_VOID &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END) ||
+ (actions[0].type == RTE_FLOW_ACTION_TYPE_PASSTHRU &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END);
+
+ actions = policy->actions[RTE_COLOR_YELLOW];
+ int yellow_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ actions = policy->actions[RTE_COLOR_RED];
+ int red_action_supported = actions[0].type == RTE_FLOW_ACTION_TYPE_DROP &&
+ actions[1].type == RTE_FLOW_ACTION_TYPE_END;
+
+ if (green_action_supported == 0 || yellow_action_supported == 0 ||
+ red_action_supported == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Unsupported meter policy actions");
+ }
+
+ if (profile_inline_ops->flow_mtr_set_policy(internals->flw_dev, policy_id, 1)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+ "Policy could not be added");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_meter_policy_delete_inline(struct rte_eth_dev *eth_dev __rte_unused,
+ uint32_t policy_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ return 0;
+}
+
+static int eth_mtr_create_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (params->use_prev_mtr_color != 0 || params->dscp_table != NULL) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only color blind mode is supported");
+ }
+
+ uint64_t allowed_stats_mask = RTE_MTR_STATS_N_PKTS_GREEN | RTE_MTR_STATS_N_BYTES_GREEN;
+
+ if ((params->stats_mask & ~allowed_stats_mask) != 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Requested color stats not supported");
+ }
+
+ if (params->meter_enable == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Disabled meters not supported");
+ }
+
+ if (shared == 0) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Only shared mtrs are supported");
+ }
+
+ if (params->meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
+ "Profile id out of range");
+
+ if (params->meter_policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+ "Policy id out of range");
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ int res = profile_inline_ops->flow_mtr_create_meter(internals->flw_dev,
+ caller_id,
+ mtr_id,
+ params->meter_profile_id,
+ params->meter_policy_id,
+ params->stats_mask);
+
+ if (res) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_destroy_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error __rte_unused)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_destroy_meter(internals->flw_dev, caller_id, mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to offload to hardware");
+ }
+
+ return 0;
+}
+
+static int eth_mtr_stats_adjust_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ uint64_t adjust_value,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ const uint64_t adjust_bit = 1ULL << 63;
+ const uint64_t probe_bit = 1ULL << 62;
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ if (adjust_value & adjust_bit) {
+ adjust_value &= adjust_bit - 1;
+
+ if (adjust_value > (uint64_t)UINT32_MAX) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "Adjust value is out of range");
+ }
+
+ if (profile_inline_ops->flm_mtr_adjust_stats(internals->flw_dev, caller_id, mtr_id,
+ (uint32_t)adjust_value)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to adjust offloaded MTR");
+ }
+
+ return 0;
+ }
+
+ if (adjust_value & probe_bit) {
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev,
+ caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ NULL, "MTR id is out of range");
+ }
+
+ if (profile_inline_ops->flow_mtr_probe_meter(internals->flw_dev, caller_id,
+ mtr_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to offload to hardware");
+ }
+
+ return 0;
+ }
+
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "Meter stats update requires bit 63 (adjust) or bit 62 (probe) of \"stats_mask\" to be set.");
+}
+
+static int eth_mtr_stats_read_inline(struct rte_eth_dev *eth_dev,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG(ERR, NTHW, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
+
+ if (mtr_id >=
+ profile_inline_ops->flow_mtr_meters_supported(internals->flw_dev, caller_id)) {
+ return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL,
+ "MTR id is out of range");
+ }
+
+ memset(stats, 0x0, sizeof(struct rte_mtr_stats));
+ profile_inline_ops->flm_mtr_read_stats(internals->flw_dev, caller_id, mtr_id, stats_mask,
+ &stats->n_pkts[RTE_COLOR_GREEN],
+ &stats->n_bytes[RTE_COLOR_GREEN], clear);
+
+ return 0;
+}
+
+/*
+ * Ops setup
+ */
+
+static const struct rte_mtr_ops mtr_ops_inline = {
+ .capabilities_get = eth_mtr_capabilities_get_inline,
+ .meter_profile_add = eth_mtr_meter_profile_add_inline,
+ .meter_profile_delete = eth_mtr_meter_profile_delete_inline,
+ .create = eth_mtr_create_inline,
+ .destroy = eth_mtr_destroy_inline,
+ .meter_policy_add = eth_mtr_meter_policy_add_inline,
+ .meter_policy_delete = eth_mtr_meter_policy_delete_inline,
+ .stats_update = eth_mtr_stats_adjust_inline,
+ .stats_read = eth_mtr_stats_read_inline,
+};
+
+static int eth_mtr_ops_get(struct rte_eth_dev *eth_dev, void *ops)
+{
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ ntdrv_4ga_t *p_nt_drv = &internals->p_drv->ntdrv;
+ enum fpga_info_profile profile = p_nt_drv->adapter_info.fpga_info.profile;
+
+ switch (profile) {
+ case FPGA_INFO_PROFILE_INLINE:
+ *(const struct rte_mtr_ops **)ops = &mtr_ops_inline;
+ break;
+
+ case FPGA_INFO_PROFILE_UNKNOWN:
+
+ /* fallthrough */
+ case FPGA_INFO_PROFILE_CAPTURE:
+
+ /* fallthrough */
+ default:
+ NT_LOG(ERR, NTHW, "" PCIIDENT_PRINT_STR ": fpga profile not supported",
+ PCIIDENT_TO_DOMAIN(p_nt_drv->pciident),
+ PCIIDENT_TO_BUSNR(p_nt_drv->pciident),
+ PCIIDENT_TO_DEVNR(p_nt_drv->pciident),
+ PCIIDENT_TO_FUNCNR(p_nt_drv->pciident));
+ return -1;
+ }
+
+ return 0;
+}
+
+static struct meter_ops_s meter_ops = {
+ .eth_mtr_ops_get = eth_mtr_ops_get,
+};
+
+void meter_init(void)
+{
+ NT_LOG(DBG, NTNIC, "Meter ops initialized");
+ register_meter_ops(&meter_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index cdf5c346b7..df9ee77e06 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1682,7 +1682,7 @@ static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_con
return 0;
}
-static const struct eth_dev_ops nthw_eth_dev_ops = {
+static struct eth_dev_ops nthw_eth_dev_ops = {
.dev_configure = eth_dev_configure,
.dev_start = eth_dev_start,
.dev_stop = eth_dev_stop,
@@ -1705,6 +1705,7 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
.mac_addr_add = eth_mac_addr_add,
.mac_addr_set = eth_mac_addr_set,
.set_mc_addr_list = eth_set_mc_addr_list,
+ .mtr_ops_get = NULL,
.flow_ops_get = dev_flow_ops_get,
.xstats_get = eth_xstats_get,
.xstats_get_names = eth_xstats_get_names,
@@ -2168,6 +2169,14 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
return -1;
}
+ const struct meter_ops_s *meter_ops = get_meter_ops();
+
+ if (meter_ops != NULL)
+ nthw_eth_dev_ops.mtr_ops_get = meter_ops->eth_mtr_ops_get;
+
+ else
+ NT_LOG(DBG, NTNIC, "Meter module is not initialized");
+
/* Initialize the queue system */
if (err == 0) {
sg_ops = get_sg_ops();
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 6737d18a6f..10aa778a57 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -19,6 +19,27 @@ const struct sg_ops_s *get_sg_ops(void)
return sg_ops;
}
+/*
+ * Meter ops section
+ */
+static struct meter_ops_s *meter_ops;
+
+void register_meter_ops(struct meter_ops_s *ops)
+{
+ meter_ops = ops;
+}
+
+const struct meter_ops_s *get_meter_ops(void)
+{
+ if (meter_ops == NULL)
+ meter_init();
+
+ return meter_ops;
+}
+
+/*
+ * Filter ops section
+ */
static const struct ntnic_filter_ops *ntnic_filter_ops;
void register_ntnic_filter_ops(const struct ntnic_filter_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 1e9dcd549f..3fbbee6490 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -115,6 +115,18 @@ void register_sg_ops(struct sg_ops_s *ops);
const struct sg_ops_s *get_sg_ops(void);
void sg_init(void);
+/* Meter ops section */
+struct meter_ops_s {
+ int (*eth_mtr_ops_get)(struct rte_eth_dev *eth_dev, void *ops);
+};
+
+void register_meter_ops(struct meter_ops_s *ops);
+const struct meter_ops_s *get_meter_ops(void);
+void meter_init(void);
+
+/*
+ * Filter ops section
+ */
struct ntnic_filter_ops {
int (*poll_statistics)(struct pmd_internals *internals);
};
--
2.45.0
* [PATCH v5 71/80] net/ntnic: add action update support
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (69 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 70/80] net/ntnic: add meter module Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 72/80] net/ntnic: add flow action update Serhii Iliushyk
` (8 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The rte_flow_ops struct was extended with the actions_update operation.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 66 +++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 10 +++
2 files changed, 76 insertions(+)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 6d65ffd38f..8edaccb65c 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -9,6 +9,7 @@
#include "ntnic_mod_reg.h"
#include "ntos_system.h"
#include "ntos_drv.h"
+#include "rte_flow.h"
#define MAX_RTE_FLOWS 8192
@@ -703,6 +704,70 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
return res;
}
+static int eth_flow_actions_update(struct rte_eth_dev *eth_dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+ int res = -1;
+
+ if (internals->flw_dev) {
+ struct pmd_internals *dev_private =
+ (struct pmd_internals *)eth_dev->data->dev_private;
+ struct fpga_info_s *fpga_info = &dev_private->p_drv->ntdrv.adapter_info.fpga_info;
+ struct cnv_action_s action = { 0 };
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ uint32_t queue_offset = 0;
+
+ if (dev_private->type == PORT_TYPE_OVERRIDE &&
+ dev_private->vpq_nb_vq > 0) {
+ /*
+ * The queues coming from the main PMD will always start from 0.
+ * When the port is a VF/vDPA port, the queues must be changed
+ * to match the queues allocated for the VF/vDPA.
+ */
+ queue_offset = dev_private->vpq[0].id;
+ }
+
+ if (create_action_elements_inline(&action, actions, MAX_ACTIONS,
+ queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return -1;
+ }
+ }
+
+ if (is_flow_handle_typecast(flow)) {
+ res = flow_filter_ops->flow_actions_update(internals->flw_dev,
+ (void *)flow,
+ action.flow_actions,
+ &flow_error);
+
+ } else {
+ res = flow_filter_ops->flow_actions_update(internals->flw_dev,
+ flow->flw_hdl,
+ action.flow_actions,
+ &flow_error);
+ }
+ }
+
+ convert_error(error, &flow_error);
+
+ return res;
+}
+
static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
struct rte_flow *flow,
FILE *file,
@@ -941,6 +1006,7 @@ static const struct rte_flow_ops dev_flow_ops = {
.create = eth_flow_create,
.destroy = eth_flow_destroy,
.flush = eth_flow_flush,
+ .actions_update = eth_flow_actions_update,
.dev_dump = eth_flow_dev_dump,
.get_aged_flows = eth_flow_get_aged_flows,
.info_get = eth_flow_info_get,
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 3fbbee6490..563e62ebce 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -292,6 +292,11 @@ struct profile_inline_ops {
uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_actions_update_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
int (*flow_dev_dump_profile_inline)(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
@@ -401,6 +406,11 @@ struct flow_filter_ops {
int (*flow_flush)(struct flow_eth_dev *dev, uint16_t caller_id,
struct rte_flow_error *error);
+ int (*flow_actions_update)(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
int (*flow_get_flm_stats)(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
/*
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
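The eth_flow_actions_update() callback above follows the driver's usual ops-table dispatch: look up the filter ops, bail out with -1 if the module is uninitialized, otherwise forward the call. The pattern can be sketched in isolation; all names below (demo_filter_ops, demo_get_ops, and friends) are illustrative stand-ins, not the driver's real types:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct flow_filter_ops: a table of function pointers. */
struct demo_filter_ops {
	int (*actions_update)(void *dev, void *flow);
};

static int demo_actions_update_impl(void *dev, void *flow)
{
	(void)dev;
	(void)flow;
	return 0; /* pretend the hardware update succeeded */
}

static const struct demo_filter_ops demo_ops = {
	.actions_update = demo_actions_update_impl,
};

/* Returns NULL when the module is "uninitialized", mirroring
 * get_flow_filter_ops() returning NULL in ntnic_filter.c. */
static const struct demo_filter_ops *demo_get_ops(int initialized)
{
	return initialized ? &demo_ops : NULL;
}

/* The dispatch pattern: fail fast if the ops table is missing,
 * otherwise forward to the registered implementation. */
static int demo_dispatch_actions_update(int initialized, void *dev, void *flow)
{
	const struct demo_filter_ops *ops = demo_get_ops(initialized);

	if (ops == NULL)
		return -1; /* module uninitialized */

	return ops->actions_update(dev, flow);
}
```

This indirection lets the PMD register or omit whole feature modules at init time, with every rte_flow entry point degrading to an error return rather than a crash when a module is absent.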
* [PATCH v5 72/80] net/ntnic: add flow action update
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (70 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 71/80] net/ntnic: add action update support Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 73/80] net/ntnic: add flow actions update Serhii Iliushyk
` (7 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The flow_filter_ops structure was extended with the flow action update API.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 16 ++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 5 +++++
2 files changed, 21 insertions(+)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 76492902ad..1fcccd37fd 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -264,6 +264,21 @@ static int flow_flush(struct flow_eth_dev *dev, uint16_t caller_id, struct rte_f
return profile_inline_ops->flow_flush_profile_inline(dev, caller_id, error);
}
+static int flow_actions_update(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_actions_update_profile_inline(dev, flow, action, error);
+}
+
/*
* Device Management API
*/
@@ -1093,6 +1108,7 @@ static const struct flow_filter_ops ops = {
.flow_create = flow_create,
.flow_destroy = flow_destroy,
.flow_flush = flow_flush,
+ .flow_actions_update = flow_actions_update,
.flow_dev_dump = flow_dev_dump,
.flow_get_flm_stats = flow_get_flm_stats,
.flow_get_aged_flows = flow_get_aged_flows,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index ea1d9c31b2..8a03be1ab7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -42,6 +42,11 @@ int flow_flush_profile_inline(struct flow_eth_dev *dev,
uint16_t caller_id,
struct rte_flow_error *error);
+int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error);
+
int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
struct flow_handle *flow,
uint16_t caller_id,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 73/80] net/ntnic: add flow actions update
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (71 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 72/80] net/ntnic: add flow action update Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 74/80] net/ntnic: migrate to the RTE spinlock Serhii Iliushyk
` (6 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Flow action update was implemented.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
.../profile_inline/flow_api_profile_inline.c | 165 ++++++++++++++++++
3 files changed, 167 insertions(+)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index bf5743f196..afdaf22e0b 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -69,6 +69,7 @@ Features
- Flow statistics
- Flow aging support
- Flow metering, including meter policy API.
+- Flow update. Update of the action list for a specific flow
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 76d5efc97c..735a295f6e 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -166,6 +166,7 @@ New Features
* Added statistics support
* Added age rte flow action support
* Added meter flow metering and flow policy support
+ * Added flow actions update support
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 189bdf01d6..aae794864e 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -15,6 +15,7 @@
#include "flow_api_hw_db_inline.h"
#include "flow_api_profile_inline_config.h"
#include "flow_id_table.h"
+#include "rte_flow.h"
#include "stream_binary_flow_api.h"
#include "flow_api_profile_inline.h"
@@ -36,6 +37,7 @@
#define NT_FLM_UNHANDLED_FLOW_TYPE 1
#define NT_FLM_OP_UNLEARN 0
#define NT_FLM_OP_LEARN 1
+#define NT_FLM_OP_RELEARN 2
#define NT_FLM_VIOLATING_MBR_FLOW_TYPE 15
#define NT_VIOLATING_MBR_CFN 0
@@ -4381,6 +4383,168 @@ int flow_flush_profile_inline(struct flow_eth_dev *dev,
return err;
}
+int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
+ struct flow_handle *flow,
+ const struct rte_flow_action action[],
+ struct rte_flow_error *error)
+{
+ assert(dev);
+ assert(flow);
+
+ uint32_t num_dest_port = 0;
+ uint32_t num_queues = 0;
+
+ int group = (int)flow->flm_kid - 2;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (flow->type != FLOW_HANDLE_TYPE_FLM) {
+ NT_LOG(ERR, FILTER,
+ "Flow actions update not supported for group 0 or default flows");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ return -1;
+ }
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate nic_flow_def";
+ return -1;
+ }
+
+ fd->non_empty = 1;
+
+ int res =
+ interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
+
+ if (res) {
+ free(fd);
+ return -1;
+ }
+
+ pthread_mutex_lock(&dev->ndev->mtx);
+
+ /* Setup new actions */
+ uint32_t local_idx_counter = 0;
+ uint32_t local_idxs[RES_COUNT];
+ memset(local_idxs, 0x0, sizeof(uint32_t) * RES_COUNT);
+
+ struct hw_db_inline_qsl_data qsl_data;
+ setup_db_qsl_data(fd, &qsl_data, num_dest_port, num_queues);
+
+ struct hw_db_inline_hsh_data hsh_data;
+ setup_db_hsh_data(fd, &hsh_data);
+
+ {
+ uint32_t flm_ft = 0;
+ uint32_t flm_scrub = 0;
+
+ /* Setup FLM RCP */
+ const struct hw_db_inline_flm_rcp_data *flm_data =
+ hw_db_inline_find_data(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_FLM_RCP,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter);
+
+ if (flm_data == NULL) {
+ NT_LOG(ERR, FILTER, "Could not retrieve FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_INVALID_OR_UNSUPPORTED_ELEM, error);
+ goto error_out;
+ }
+
+ struct hw_db_flm_idx flm_idx =
+ hw_db_inline_flm_add(dev->ndev, dev->ndev->hw_db_handle, flm_data, group);
+
+ local_idxs[local_idx_counter++] = flm_idx.raw;
+
+ if (flm_idx.error) {
+ NT_LOG(ERR, FILTER, "Could not reference FLM RCP resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ if (setup_flow_flm_actions(dev, fd, &qsl_data, &hsh_data, group, local_idxs,
+ &local_idx_counter, &flow->flm_rpl_ext_ptr, &flm_ft,
+ &flm_scrub, error)) {
+ goto error_out;
+ }
+
+ /* Update flow_handle */
+ for (int i = 0; i < MAX_FLM_MTRS_SUPPORTED; ++i) {
+ struct flm_flow_mtr_handle_s *handle = dev->ndev->flm_mtr_handle;
+ struct flm_mtr_stat_s *mtr_stat =
+ handle->port_stats[flow->caller_id]->stats;
+ flow->flm_mtr_ids[i] =
+ fd->mtr_ids[i] == UINT32_MAX ? 0 : mtr_stat[fd->mtr_ids[i]].flm_id;
+ }
+
+ for (unsigned int i = 0; i < fd->modify_field_count; ++i) {
+ switch (fd->modify_field[i].select) {
+ case CPY_SELECT_DSCP_IPV4:
+
+ /* fallthrough */
+ case CPY_SELECT_DSCP_IPV6:
+ flow->flm_dscp = fd->modify_field[i].value8[0];
+ break;
+
+ case CPY_SELECT_RQI_QFI:
+ flow->flm_rqi = (fd->modify_field[i].value8[0] >> 6) & 0x1;
+ flow->flm_qfi = fd->modify_field[i].value8[0] & 0x3f;
+ break;
+
+ case CPY_SELECT_IPV4:
+ flow->flm_nat_ipv4 = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ case CPY_SELECT_PORT:
+ flow->flm_nat_port = ntohs(fd->modify_field[i].value16[0]);
+ break;
+
+ case CPY_SELECT_TEID:
+ flow->flm_teid = ntohl(fd->modify_field[i].value32[0]);
+ break;
+
+ default:
+ NT_LOG(DBG, FILTER, "Unknown modify field: %d",
+ fd->modify_field[i].select);
+ break;
+ }
+ }
+
+ flow->flm_ft = (uint8_t)flm_ft;
+ flow->flm_scrub_prof = (uint8_t)flm_scrub;
+ flow->context = fd->age.context;
+
+ /* Program flow */
+ flm_flow_programming(flow, NT_FLM_OP_RELEARN);
+
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)flow->flm_db_idxs,
+ flow->flm_db_idx_counter);
+ memset(flow->flm_db_idxs, 0x0, sizeof(struct hw_db_idx) * RES_COUNT);
+
+ flow->flm_db_idx_counter = local_idx_counter;
+
+ for (int i = 0; i < RES_COUNT; ++i)
+ flow->flm_db_idxs[i] = local_idxs[i];
+ }
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ free(fd);
+ return 0;
+
+error_out:
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle, (struct hw_db_idx *)local_idxs,
+ local_idx_counter);
+
+ pthread_mutex_unlock(&dev->ndev->mtx);
+
+ free(fd);
+ return -1;
+}
+
static __rte_always_inline bool all_bits_enabled(uint64_t hash_mask, uint64_t hash_bits)
{
return (hash_mask & hash_bits) == hash_bits;
@@ -5324,6 +5488,7 @@ static const struct profile_inline_ops ops = {
.flow_create_profile_inline = flow_create_profile_inline,
.flow_destroy_profile_inline = flow_destroy_profile_inline,
.flow_flush_profile_inline = flow_flush_profile_inline,
+ .flow_actions_update_profile_inline = flow_actions_update_profile_inline,
.flow_nic_set_hasher_fields_inline = flow_nic_set_hasher_fields_inline,
.flow_get_aged_flows_profile_inline = flow_get_aged_flows_profile_inline,
/*
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
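flow_actions_update_profile_inline() above uses a swap-on-success resource pattern: new hardware-database indices are collected in a local array first, and only after programming succeeds are the flow's old indices dereferenced and replaced; on failure only the local indices are rolled back and the flow keeps its old resources. A minimal sketch of that pattern, with hypothetical names (demo_flow, demo_update_flow) rather than the driver's real types:

```c
#include <assert.h>
#include <string.h>

#define DEMO_RES_COUNT 8

struct demo_flow {
	unsigned int idxs[DEMO_RES_COUNT]; /* resource indices held by flow */
	unsigned int idx_count;
};

/* Stand-in for hw_db_inline_deref_idxs(): release a set of indices. */
static void demo_deref_idxs(const unsigned int *idxs, unsigned int count)
{
	(void)idxs;
	(void)count;
}

/* Returns 0 on success, -1 on failure. On failure the flow's old
 * indices are untouched; only the freshly allocated ones are released. */
static int demo_update_flow(struct demo_flow *flow, int programming_ok)
{
	unsigned int local_idxs[DEMO_RES_COUNT] = { 0 };
	unsigned int local_count = 0;

	local_idxs[local_count++] = 42; /* pretend a new allocation */

	if (!programming_ok) {
		/* error path: roll back only the new resources */
		demo_deref_idxs(local_idxs, local_count);
		return -1;
	}

	/* success path: release old resources, then adopt the new set */
	demo_deref_idxs(flow->idxs, flow->idx_count);
	memcpy(flow->idxs, local_idxs, sizeof(local_idxs));
	flow->idx_count = local_count;
	return 0;
}
```

The ordering matters: because the old indices are released only after the relearn is programmed, a failed update leaves the flow fully functional with its previous action set.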
* [PATCH v5 74/80] net/ntnic: migrate to the RTE spinlock
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (72 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 73/80] net/ntnic: add flow actions update Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 75/80] net/ntnic: remove unnecessary Serhii Iliushyk
` (5 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Migrate from pthread mutexes to rte_spinlock.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api.h | 6 +-
drivers/net/ntnic/include/ntdrv_4ga.h | 3 +-
.../net/ntnic/nthw/core/include/nthw_i2cm.h | 4 +-
.../net/ntnic/nthw/core/include/nthw_rpf.h | 5 +-
drivers/net/ntnic/nthw/core/nthw_rpf.c | 3 +-
drivers/net/ntnic/nthw/flow_api/flow_api.c | 43 +++++-----
.../net/ntnic/nthw/flow_api/flow_id_table.c | 20 +++--
.../profile_inline/flow_api_profile_inline.c | 80 +++++++++++--------
.../ntnic/nthw/flow_filter/flow_nthw_flm.c | 47 +++++++++--
drivers/net/ntnic/nthw/nthw_rac.c | 38 ++-------
drivers/net/ntnic/nthw/nthw_rac.h | 2 +-
drivers/net/ntnic/ntnic_ethdev.c | 31 +++----
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 6 +-
13 files changed, 155 insertions(+), 133 deletions(-)
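The migration swaps blocking pthread mutexes for busy-waiting spinlocks around the driver's short critical sections. rte_spinlock_t is DPDK-specific, but the lock/unlock pattern that now guards ndev->mtx can be mimicked outside DPDK with a C11 atomic_flag; this is a hedged sketch of the pattern, not the driver's actual lock:

```c
#include <assert.h>
#include <stdatomic.h>

/* Minimal spinlock stand-in built on C11 atomic_flag. */
typedef struct {
	atomic_flag locked;
} demo_spinlock_t;

static void demo_spinlock_lock(demo_spinlock_t *sl)
{
	/* Busy-wait until the flag is acquired; unlike a pthread mutex,
	 * a contending thread spins instead of sleeping. */
	while (atomic_flag_test_and_set_explicit(&sl->locked,
						 memory_order_acquire))
		;
}

static void demo_spinlock_unlock(demo_spinlock_t *sl)
{
	atomic_flag_clear_explicit(&sl->locked, memory_order_release);
}

static demo_spinlock_t demo_lock = { ATOMIC_FLAG_INIT };
static int demo_counter;

/* Critical section mirroring e.g. nic_insert_flow(): take the lock,
 * mutate shared state, release the lock. */
static int demo_protected_increment(void)
{
	demo_spinlock_lock(&demo_lock);
	int val = ++demo_counter;
	demo_spinlock_unlock(&demo_lock);
	return val;
}
```

Spinlocks trade sleep/wakeup overhead for CPU spinning, which suits DPDK's run-to-completion threads and the very short hold times in this driver, but would be a poor fit for long critical sections.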
diff --git a/drivers/net/ntnic/include/flow_api.h b/drivers/net/ntnic/include/flow_api.h
index 032063712a..d5382669da 100644
--- a/drivers/net/ntnic/include/flow_api.h
+++ b/drivers/net/ntnic/include/flow_api.h
@@ -6,7 +6,7 @@
#ifndef _FLOW_API_H_
#define _FLOW_API_H_
-#include <pthread.h>
+#include <rte_spinlock.h>
#include "ntlog.h"
@@ -110,13 +110,13 @@ struct flow_nic_dev {
struct flow_handle *flow_base;
/* linked list of all FLM flows created on this NIC */
struct flow_handle *flow_base_flm;
- pthread_mutex_t flow_mtx;
+ rte_spinlock_t flow_mtx;
/* NIC backend API */
struct flow_api_backend_s be;
/* linked list of created eth-port devices on this NIC */
struct flow_eth_dev *eth_base;
- pthread_mutex_t mtx;
+ rte_spinlock_t mtx;
/* RSS hashing configuration */
struct nt_eth_rss_conf rss_conf;
diff --git a/drivers/net/ntnic/include/ntdrv_4ga.h b/drivers/net/ntnic/include/ntdrv_4ga.h
index 677aa7b6c8..78cf10368a 100644
--- a/drivers/net/ntnic/include/ntdrv_4ga.h
+++ b/drivers/net/ntnic/include/ntdrv_4ga.h
@@ -7,6 +7,7 @@
#define __NTDRV_4GA_H__
#include "nt4ga_adapter.h"
+#include <rte_spinlock.h>
typedef struct ntdrv_4ga_s {
uint32_t pciident;
@@ -15,7 +16,7 @@ typedef struct ntdrv_4ga_s {
volatile bool b_shutdown;
rte_thread_t flm_thread;
- pthread_mutex_t stat_lck;
+ rte_spinlock_t stat_lck;
rte_thread_t stat_thread;
rte_thread_t port_event_thread;
} ntdrv_4ga_t;
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h b/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h
index 6e0ec4cf5e..eeb4dffe25 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_i2cm.h
@@ -7,7 +7,7 @@
#define __NTHW_II2CM_H__
#include "nthw_fpga_model.h"
-#include "pthread.h"
+#include "rte_spinlock.h"
struct nt_i2cm {
nthw_fpga_t *mp_fpga;
@@ -39,7 +39,7 @@ struct nt_i2cm {
nthw_field_t *mp_fld_io_exp_rst;
nthw_field_t *mp_fld_io_exp_int_b;
- pthread_mutex_t i2cmmutex;
+ rte_spinlock_t i2cmmutex;
};
typedef struct nt_i2cm nthw_i2cm_t;
diff --git a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
index 4c6c57ba55..00b322b2ea 100644
--- a/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
+++ b/drivers/net/ntnic/nthw/core/include/nthw_rpf.h
@@ -7,7 +7,8 @@
#define NTHW_RPF_HPP_
#include "nthw_fpga_model.h"
-#include "pthread.h"
+#include "rte_spinlock.h"
+#include <rte_spinlock.h>
struct nthw_rpf {
nthw_fpga_t *mp_fpga;
@@ -28,7 +29,7 @@ struct nthw_rpf {
int m_default_maturing_delay;
bool m_administrative_block; /* used to enforce license expiry */
- pthread_mutex_t rpf_mutex;
+ rte_spinlock_t rpf_mutex;
};
typedef struct nthw_rpf nthw_rpf_t;
diff --git a/drivers/net/ntnic/nthw/core/nthw_rpf.c b/drivers/net/ntnic/nthw/core/nthw_rpf.c
index 81c704d01a..1ed4d7b4e0 100644
--- a/drivers/net/ntnic/nthw/core/nthw_rpf.c
+++ b/drivers/net/ntnic/nthw/core/nthw_rpf.c
@@ -8,6 +8,7 @@
#include "nthw_drv.h"
#include "nthw_register.h"
#include "nthw_rpf.h"
+#include "rte_spinlock.h"
nthw_rpf_t *nthw_rpf_new(void)
{
@@ -65,7 +66,7 @@ int nthw_rpf_init(nthw_rpf_t *p, nthw_fpga_t *p_fpga, int n_instance)
nthw_fpga_get_product_param(p_fpga, NT_RPF_MATURING_DEL_DEFAULT, 0);
/* Initialize mutex */
- pthread_mutex_init(&p->rpf_mutex, NULL);
+ rte_spinlock_init(&p->rpf_mutex);
return 0;
}
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 1fcccd37fd..337902f654 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -2,6 +2,7 @@
* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2023 Napatech A/S
*/
+#include "rte_spinlock.h"
#include "ntlog.h"
#include "nt_util.h"
@@ -42,7 +43,7 @@ const char *dbg_res_descr[] = {
};
static struct flow_nic_dev *dev_base;
-static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;
+static rte_spinlock_t base_mtx = RTE_SPINLOCK_INITIALIZER;
/*
* Error handling
@@ -398,7 +399,7 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
#endif
/* delete all created flows from this device */
- pthread_mutex_lock(&ndev->mtx);
+ rte_spinlock_lock(&ndev->mtx);
struct flow_handle *flow = ndev->flow_base;
@@ -442,7 +443,7 @@ int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
if (nic_remove_eth_port_dev(ndev, eth_dev) != 0)
NT_LOG(ERR, FILTER, "ERROR : eth_dev %p not found", eth_dev);
- pthread_mutex_unlock(&ndev->mtx);
+ rte_spinlock_unlock(&ndev->mtx);
/* free eth_dev */
free(eth_dev);
@@ -483,15 +484,15 @@ static void done_resource_elements(struct flow_nic_dev *ndev, enum res_type_e re
static void list_insert_flow_nic(struct flow_nic_dev *ndev)
{
- pthread_mutex_lock(&base_mtx);
+ rte_spinlock_lock(&base_mtx);
ndev->next = dev_base;
dev_base = ndev;
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
}
static int list_remove_flow_nic(struct flow_nic_dev *ndev)
{
- pthread_mutex_lock(&base_mtx);
+ rte_spinlock_lock(&base_mtx);
struct flow_nic_dev *nic_dev = dev_base, *prev = NULL;
while (nic_dev) {
@@ -502,7 +503,7 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
else
dev_base = nic_dev->next;
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return 0;
}
@@ -510,7 +511,7 @@ static int list_remove_flow_nic(struct flow_nic_dev *ndev)
nic_dev = nic_dev->next;
}
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return -1;
}
@@ -542,27 +543,27 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
"ERROR: Internal array for multiple queues too small for API");
}
- pthread_mutex_lock(&base_mtx);
+ rte_spinlock_lock(&base_mtx);
struct flow_nic_dev *ndev = get_nic_dev_from_adapter_no(adapter_no);
if (!ndev) {
/* Error - no flow api found on specified adapter */
NT_LOG(ERR, FILTER, "ERROR: no flow interface registered for adapter %d",
adapter_no);
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return NULL;
}
if (ndev->ports < ((uint16_t)port_no + 1)) {
NT_LOG(ERR, FILTER, "ERROR: port exceeds supported port range for adapter");
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return NULL;
}
if ((alloc_rx_queues - 1) > FLOW_MAX_QUEUES) { /* 0th is exception so +1 */
NT_LOG(ERR, FILTER,
"ERROR: Exceeds supported number of rx queues per eth device");
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&base_mtx);
return NULL;
}
@@ -572,20 +573,19 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
if (eth_dev) {
NT_LOG(DBG, FILTER, "Re-opening existing NIC port device: NIC DEV: %i Port %i",
adapter_no, port_no);
- pthread_mutex_unlock(&base_mtx);
flow_delete_eth_dev(eth_dev);
eth_dev = NULL;
}
+ rte_spinlock_lock(&ndev->mtx);
+
eth_dev = calloc(1, sizeof(struct flow_eth_dev));
if (!eth_dev) {
NT_LOG(ERR, FILTER, "ERROR: calloc failed");
- goto err_exit1;
+ goto err_exit0;
}
- pthread_mutex_lock(&ndev->mtx);
-
eth_dev->ndev = ndev;
eth_dev->port = port_no;
eth_dev->port_id = port_id;
@@ -650,15 +650,14 @@ static struct flow_eth_dev *flow_get_eth_dev(uint8_t adapter_no, uint8_t port_no
nic_insert_eth_port_dev(ndev, eth_dev);
- pthread_mutex_unlock(&ndev->mtx);
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&ndev->mtx);
+ rte_spinlock_unlock(&base_mtx);
return eth_dev;
err_exit0:
- pthread_mutex_unlock(&ndev->mtx);
- pthread_mutex_unlock(&base_mtx);
+ rte_spinlock_unlock(&ndev->mtx);
+ rte_spinlock_unlock(&base_mtx);
-err_exit1:
if (eth_dev)
free(eth_dev);
@@ -765,7 +764,7 @@ struct flow_nic_dev *flow_api_create(uint8_t adapter_no, const struct flow_api_b
for (int i = 0; i < RES_COUNT; i++)
assert(ndev->res[i].alloc_bm);
- pthread_mutex_init(&ndev->mtx, NULL);
+ rte_spinlock_init(&ndev->mtx);
list_insert_flow_nic(ndev);
return ndev;
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
index a3f5e1d7f7..a63f5542d1 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_id_table.c
@@ -3,12 +3,12 @@
* Copyright(c) 2024 Napatech A/S
*/
-#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "flow_id_table.h"
+#include "rte_spinlock.h"
#define NTNIC_ARRAY_BITS 14
#define NTNIC_ARRAY_SIZE (1 << NTNIC_ARRAY_BITS)
@@ -25,7 +25,7 @@ struct ntnic_id_table_element {
struct ntnic_id_table_data {
struct ntnic_id_table_element *arrays[NTNIC_ARRAY_SIZE];
- pthread_mutex_t mtx;
+ rte_spinlock_t mtx;
uint32_t next_id;
@@ -68,7 +68,7 @@ void *ntnic_id_table_create(void)
{
struct ntnic_id_table_data *handle = calloc(1, sizeof(struct ntnic_id_table_data));
- pthread_mutex_init(&handle->mtx, NULL);
+ rte_spinlock_init(&handle->mtx);
handle->next_id = 1;
return handle;
@@ -81,8 +81,6 @@ void ntnic_id_table_destroy(void *id_table)
for (uint32_t i = 0; i < NTNIC_ARRAY_SIZE; ++i)
free(handle->arrays[i]);
- pthread_mutex_destroy(&handle->mtx);
-
free(id_table);
}
@@ -91,7 +89,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
{
struct ntnic_id_table_data *handle = id_table;
- pthread_mutex_lock(&handle->mtx);
+ rte_spinlock_lock(&handle->mtx);
uint32_t new_id = ntnic_id_table_array_pop_free_id(handle);
@@ -103,7 +101,7 @@ uint32_t ntnic_id_table_get_id(void *id_table, union flm_handles flm_h, uint8_t
element->type = type;
memcpy(&element->handle, &flm_h, sizeof(union flm_handles));
- pthread_mutex_unlock(&handle->mtx);
+ rte_spinlock_unlock(&handle->mtx);
return new_id;
}
@@ -112,7 +110,7 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
{
struct ntnic_id_table_data *handle = id_table;
- pthread_mutex_lock(&handle->mtx);
+ rte_spinlock_lock(&handle->mtx);
struct ntnic_id_table_element *current_element =
ntnic_id_table_array_find_element(handle, id);
@@ -127,7 +125,7 @@ void ntnic_id_table_free_id(void *id_table, uint32_t id)
if (handle->free_tail == 0)
handle->free_tail = handle->free_head;
- pthread_mutex_unlock(&handle->mtx);
+ rte_spinlock_unlock(&handle->mtx);
}
void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h, uint8_t *caller_id,
@@ -135,7 +133,7 @@ void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h,
{
struct ntnic_id_table_data *handle = id_table;
- pthread_mutex_lock(&handle->mtx);
+ rte_spinlock_lock(&handle->mtx);
struct ntnic_id_table_element *element = ntnic_id_table_array_find_element(handle, id);
@@ -143,5 +141,5 @@ void ntnic_id_table_find(void *id_table, uint32_t id, union flm_handles *flm_h,
*type = element->type;
memcpy(flm_h, &element->handle, sizeof(union flm_handles));
- pthread_mutex_unlock(&handle->mtx);
+ rte_spinlock_unlock(&handle->mtx);
}
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index aae794864e..f9133ad802 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include "generic/rte_spinlock.h"
#include "ntlog.h"
#include "nt_util.h"
@@ -20,6 +21,7 @@
#include "flow_api_profile_inline.h"
#include "ntnic_mod_reg.h"
+#include <rte_spinlock.h>
#include <rte_common.h>
#define FLM_MTR_PROFILE_SIZE 0x100000
@@ -189,7 +191,7 @@ static int flow_mtr_create_meter(struct flow_eth_dev *dev,
(void)policy_id;
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -238,7 +240,7 @@ static int flow_mtr_create_meter(struct flow_eth_dev *dev,
mtr_stat[mtr_id].flm_id = flm_id;
atomic_store(&mtr_stat[mtr_id].stats_mask, stats_mask);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -247,7 +249,7 @@ static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uin
{
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -278,7 +280,7 @@ static int flow_mtr_probe_meter(struct flow_eth_dev *dev, uint8_t caller_id, uin
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -287,7 +289,7 @@ static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, u
{
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -330,7 +332,7 @@ static int flow_mtr_destroy_meter(struct flow_eth_dev *dev, uint8_t caller_id, u
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -340,7 +342,7 @@ static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uin
{
struct flm_v25_lrn_data_s *learn_record = NULL;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
learn_record =
(struct flm_v25_lrn_data_s *)
@@ -377,7 +379,7 @@ static int flm_mtr_adjust_stats(struct flow_eth_dev *dev, uint8_t caller_id, uin
flm_lrn_queue_release_write_buffer(flm_lrn_queue_arr);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
@@ -514,9 +516,9 @@ static void flm_mtr_read_sta_records(struct flow_eth_dev *dev, uint32_t *data, u
uint8_t port;
bool remote_caller = is_remote_caller(caller_id, &port);
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
((struct flow_handle *)flm_h.p)->learn_ignored = 1;
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
struct flm_status_event_s data = {
.flow = flm_h.p,
.learn_ignore = sta_data->lis,
@@ -813,7 +815,7 @@ static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t p
static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
{
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (ndev->flow_base)
ndev->flow_base->prev = fh;
@@ -822,7 +824,7 @@ static void nic_insert_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
fh->prev = NULL;
ndev->flow_base = fh;
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
@@ -830,7 +832,7 @@ static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
struct flow_handle *next = fh->next;
struct flow_handle *prev = fh->prev;
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (next && prev) {
prev->next = next;
@@ -847,12 +849,12 @@ static void nic_remove_flow(struct flow_nic_dev *ndev, struct flow_handle *fh)
ndev->flow_base = NULL;
}
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh)
{
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (ndev->flow_base_flm)
ndev->flow_base_flm->prev = fh;
@@ -861,7 +863,7 @@ static void nic_insert_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *f
fh->prev = NULL;
ndev->flow_base_flm = fh;
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *fh_flm)
@@ -869,7 +871,7 @@ static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *f
struct flow_handle *next = fh_flm->next;
struct flow_handle *prev = fh_flm->prev;
- pthread_mutex_lock(&ndev->flow_mtx);
+ rte_spinlock_lock(&ndev->flow_mtx);
if (next && prev) {
prev->next = next;
@@ -886,7 +888,7 @@ static void nic_remove_flow_flm(struct flow_nic_dev *ndev, struct flow_handle *f
ndev->flow_base_flm = NULL;
}
- pthread_mutex_unlock(&ndev->flow_mtx);
+ rte_spinlock_unlock(&ndev->flow_mtx);
}
static inline struct nic_flow_def *prepare_nic_flow_def(struct nic_flow_def *fd)
@@ -4188,20 +4190,20 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev __rte_un
struct nic_flow_def *fd = allocate_nic_flow_def();
if (fd == NULL)
- goto err_exit;
+ goto err_exit0;
res = interpret_flow_actions(dev, action, NULL, fd, error, &num_dest_port, &num_queues);
if (res)
- goto err_exit;
+ goto err_exit0;
res = interpret_flow_elements(dev, elem, fd, error, forced_vlan_vid_local, &port_id,
packet_data, packet_mask, &key_def);
if (res)
- goto err_exit;
+ goto err_exit0;
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
/* Translate group IDs */
if (fd->jump_to_group != UINT32_MAX &&
@@ -4235,19 +4237,27 @@ struct flow_handle *flow_create_profile_inline(struct flow_eth_dev *dev __rte_un
NT_LOG(DBG, FILTER, ">>>>> [Dev %p] Nic %i, Port %i: fh %p fd %p - implementation <<<<<",
dev, dev->ndev->adapter_no, dev->port, fh, fd);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return fh;
err_exit:
- if (fh)
+ if (fh) {
flow_destroy_locked_profile_inline(dev, fh, NULL);
-
- else
+ fh = NULL;
+ } else {
free(fd);
+ fd = NULL;
+ }
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
- pthread_mutex_unlock(&dev->ndev->mtx);
+err_exit0:
+ if (fd) {
+ free(fd);
+ fd = NULL;
+ }
NT_LOG(ERR, FILTER, "ERR: %s", __func__);
return NULL;
@@ -4308,6 +4318,7 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
(struct hw_db_idx *)fh->db_idxs, fh->db_idx_counter);
free(fh->fd);
+ fh->fd = NULL;
}
if (err) {
@@ -4316,6 +4327,7 @@ int flow_destroy_locked_profile_inline(struct flow_eth_dev *dev,
}
free(fh);
+ fh = NULL;
#ifdef FLOW_DEBUG
dev->ndev->be.iface->set_debug_mode(dev->ndev->be.be_dev, FLOW_BACKEND_DEBUG_MODE_NONE);
@@ -4333,9 +4345,9 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
if (flow) {
/* Delete this flow */
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
err = flow_destroy_locked_profile_inline(dev, flow, error);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
}
return err;
@@ -4423,7 +4435,7 @@ int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
return -1;
}
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
/* Setup new actions */
uint32_t local_idx_counter = 0;
@@ -4530,7 +4542,7 @@ int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
flow->flm_db_idxs[i] = local_idxs[i];
}
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
free(fd);
return 0;
@@ -4539,7 +4551,7 @@ int flow_actions_update_profile_inline(struct flow_eth_dev *dev,
hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle, (struct hw_db_idx *)local_idxs,
local_idx_counter);
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
free(fd);
return -1;
@@ -5276,7 +5288,7 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
{
flow_nic_set_error(ERR_SUCCESS, error);
- pthread_mutex_lock(&dev->ndev->mtx);
+ rte_spinlock_lock(&dev->ndev->mtx);
if (flow != NULL) {
if (flow->type == FLOW_HANDLE_TYPE_FLM) {
@@ -5335,7 +5347,7 @@ int flow_dev_dump_profile_inline(struct flow_eth_dev *dev,
}
}
- pthread_mutex_unlock(&dev->ndev->mtx);
+ rte_spinlock_unlock(&dev->ndev->mtx);
return 0;
}
diff --git a/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c b/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c
index 6f3b381a17..8855978349 100644
--- a/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c
+++ b/drivers/net/ntnic/nthw/flow_filter/flow_nthw_flm.c
@@ -678,11 +678,13 @@ int flm_nthw_buf_ctrl_update(const struct flm_nthw *p, uint32_t *lrn_free, uint3
uint32_t address_bufctrl = nthw_register_get_address(p->mp_buf_ctrl);
nthw_rab_bus_id_t bus_id = 1;
struct dma_buf_ptr bc_buf;
- ret = nthw_rac_rab_dma_begin(rac);
+ rte_spinlock_lock(&rac->m_mutex);
+ ret = !rac->m_dma_active ? nthw_rac_rab_dma_begin(rac) : -1;
if (ret == 0) {
nthw_rac_rab_read32_dma(rac, bus_id, address_bufctrl, 2, &bc_buf);
- ret = nthw_rac_rab_dma_commit(rac);
+ ret = rac->m_dma_active ? nthw_rac_rab_dma_commit(rac) : (assert(0), -1);
+ rte_spinlock_unlock(&rac->m_mutex);
if (ret != 0)
return ret;
@@ -692,6 +694,13 @@ int flm_nthw_buf_ctrl_update(const struct flm_nthw *p, uint32_t *lrn_free, uint3
*lrn_free = bc_buf.base[bc_index & bc_mask] & 0xffff;
*inf_avail = (bc_buf.base[bc_index & bc_mask] >> 16) & 0xffff;
*sta_avail = bc_buf.base[(bc_index + 1) & bc_mask] & 0xffff;
+ } else {
+ rte_spinlock_unlock(&rac->m_mutex);
+ const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
+ const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
+ NT_LOG(ERR, NTHW,
+ "%s: DMA begin requested, but a DMA transaction is already active",
+ p_adapter_id_str);
}
return ret;
@@ -716,8 +725,10 @@ int flm_nthw_lrn_data_flush(const struct flm_nthw *p, const uint32_t *data, uint
*handled_records = 0;
int max_tries = 10000;
- while (*inf_avail == 0 && *sta_avail == 0 && records != 0 && --max_tries > 0)
- if (nthw_rac_rab_dma_begin(rac) == 0) {
+ while (*inf_avail == 0 && *sta_avail == 0 && records != 0 && --max_tries > 0) {
+ rte_spinlock_lock(&rac->m_mutex);
+ int ret = !rac->m_dma_active ? nthw_rac_rab_dma_begin(rac) : -1;
+ if (ret == 0) {
uint32_t dma_free = nthw_rac_rab_get_free(rac);
if (dma_free != RAB_DMA_BUF_CNT) {
@@ -770,7 +781,11 @@ int flm_nthw_lrn_data_flush(const struct flm_nthw *p, const uint32_t *data, uint
/* Read buf ctrl */
nthw_rac_rab_read32_dma(rac, bus_id, address_bufctrl, 2, &bc_buf);
- if (nthw_rac_rab_dma_commit(rac) != 0)
+ int ret = rac->m_dma_active ?
+ nthw_rac_rab_dma_commit(rac) :
+ (assert(0), -1);
+ rte_spinlock_unlock(&rac->m_mutex);
+ if (ret != 0)
return -1;
uint32_t bc_mask = bc_buf.size - 1;
@@ -778,8 +793,15 @@ int flm_nthw_lrn_data_flush(const struct flm_nthw *p, const uint32_t *data, uint
*lrn_free = bc_buf.base[bc_index & bc_mask] & 0xffff;
*inf_avail = (bc_buf.base[bc_index & bc_mask] >> 16) & 0xffff;
*sta_avail = bc_buf.base[(bc_index + 1) & bc_mask] & 0xffff;
+ } else {
+ rte_spinlock_unlock(&rac->m_mutex);
+ const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
+ const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
+ NT_LOG(ERR, NTHW,
+ "%s: DMA begin requested, but a DMA transaction is already active",
+ p_adapter_id_str);
}
-
+ }
return 0;
}
@@ -801,7 +823,8 @@ int flm_nthw_inf_sta_data_update(const struct flm_nthw *p, uint32_t *inf_data,
uint32_t mask;
uint32_t index;
- ret = nthw_rac_rab_dma_begin(rac);
+ rte_spinlock_lock(&rac->m_mutex);
+ ret = !rac->m_dma_active ? nthw_rac_rab_dma_begin(rac) : -1;
if (ret == 0) {
/* Announce the number of words to read from INF_DATA */
@@ -821,7 +844,8 @@ int flm_nthw_inf_sta_data_update(const struct flm_nthw *p, uint32_t *inf_data,
}
nthw_rac_rab_read32_dma(rac, bus_id, address_bufctrl, 2, &bc_buf);
- ret = nthw_rac_rab_dma_commit(rac);
+ ret = rac->m_dma_active ? nthw_rac_rab_dma_commit(rac) : (assert(0), -1);
+ rte_spinlock_unlock(&rac->m_mutex);
if (ret != 0)
return ret;
@@ -847,6 +871,13 @@ int flm_nthw_inf_sta_data_update(const struct flm_nthw *p, uint32_t *inf_data,
*lrn_free = bc_buf.base[index & mask] & 0xffff;
*inf_avail = (bc_buf.base[index & mask] >> 16) & 0xffff;
*sta_avail = bc_buf.base[(index + 1) & mask] & 0xffff;
+ } else {
+ rte_spinlock_unlock(&rac->m_mutex);
+ const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
+ const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
+ NT_LOG(ERR, NTHW,
+ "%s: DMA begin requested, but a DMA transaction is already active",
+ p_adapter_id_str);
}
return ret;
diff --git a/drivers/net/ntnic/nthw/nthw_rac.c b/drivers/net/ntnic/nthw/nthw_rac.c
index 461da8e104..ca6aba6db2 100644
--- a/drivers/net/ntnic/nthw/nthw_rac.c
+++ b/drivers/net/ntnic/nthw/nthw_rac.c
@@ -3,6 +3,7 @@
* Copyright(c) 2023 Napatech A/S
*/
+#include "rte_spinlock.h"
#include "nt_util.h"
#include "ntlog.h"
@@ -10,8 +11,6 @@
#include "nthw_register.h"
#include "nthw_rac.h"
-#include <pthread.h>
-
#define RAB_DMA_WAIT (1000000)
#define RAB_READ (0x01)
@@ -217,7 +216,7 @@ int nthw_rac_init(nthw_rac_t *p, nthw_fpga_t *p_fpga, struct fpga_info_s *p_fpga
}
}
- pthread_mutex_init(&p->m_mutex, NULL);
+ rte_spinlock_init(&p->m_mutex);
return 0;
}
@@ -389,19 +388,6 @@ void nthw_rac_bar0_write32(const struct fpga_info_s *p_fpga_info, uint32_t reg_a
int nthw_rac_rab_dma_begin(nthw_rac_t *p)
{
- const struct fpga_info_s *const p_fpga_info = p->mp_fpga->p_fpga_info;
- const char *const p_adapter_id_str = p_fpga_info->mp_adapter_id_str;
-
- pthread_mutex_lock(&p->m_mutex);
-
- if (p->m_dma_active) {
- pthread_mutex_unlock(&p->m_mutex);
- NT_LOG(ERR, NTHW,
- "%s: DMA begin requested, but a DMA transaction is already active",
- p_adapter_id_str);
- return -1;
- }
-
p->m_dma_active = true;
return 0;
@@ -454,19 +440,11 @@ int nthw_rac_rab_dma_commit(nthw_rac_t *p)
{
int ret;
- if (!p->m_dma_active) {
- /* Expecting mutex not to be locked! */
- assert(0); /* alert developer that something is wrong */
- return -1;
- }
-
nthw_rac_rab_dma_activate(p);
ret = nthw_rac_rab_dma_wait(p);
p->m_dma_active = false;
- pthread_mutex_unlock(&p->m_mutex);
-
return ret;
}
@@ -602,7 +580,7 @@ int nthw_rac_rab_write32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint
return -1;
}
- pthread_mutex_lock(&p->m_mutex);
+ rte_spinlock_lock(&p->m_mutex);
if (p->m_dma_active) {
NT_LOG(ERR, NTHW, "%s: RAB: Illegal operation: DMA enabled", p_adapter_id_str);
@@ -748,7 +726,7 @@ int nthw_rac_rab_write32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint
}
exit_unlock_res:
- pthread_mutex_unlock(&p->m_mutex);
+ rte_spinlock_unlock(&p->m_mutex);
return res;
}
@@ -763,7 +741,7 @@ int nthw_rac_rab_read32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint3
uint32_t out_buf_free;
int res = 0;
- pthread_mutex_lock(&p->m_mutex);
+ rte_spinlock_lock(&p->m_mutex);
if (address > (1 << RAB_ADDR_BW)) {
NT_LOG(ERR, NTHW, "%s: RAB: Illegal address: value too large %d - max %d",
@@ -923,7 +901,7 @@ int nthw_rac_rab_read32(nthw_rac_t *p, bool trc, nthw_rab_bus_id_t bus_id, uint3
}
exit_unlock_res:
- pthread_mutex_unlock(&p->m_mutex);
+ rte_spinlock_unlock(&p->m_mutex);
return res;
}
@@ -935,7 +913,7 @@ int nthw_rac_rab_flush(nthw_rac_t *p)
uint32_t retry;
int res = 0;
- pthread_mutex_lock(&p->m_mutex);
+ rte_spinlock_lock(&p->m_mutex);
/* Set the flush bit */
nthw_rac_reg_write32(p_fpga_info, p->RAC_RAB_BUF_USED_ADDR,
@@ -960,6 +938,6 @@ int nthw_rac_rab_flush(nthw_rac_t *p)
/* Clear flush bit when done */
nthw_rac_reg_write32(p_fpga_info, p->RAC_RAB_BUF_USED_ADDR, 0x0);
- pthread_mutex_unlock(&p->m_mutex);
+ rte_spinlock_unlock(&p->m_mutex);
return res;
}
diff --git a/drivers/net/ntnic/nthw/nthw_rac.h b/drivers/net/ntnic/nthw/nthw_rac.h
index c64dac9da9..df92b487af 100644
--- a/drivers/net/ntnic/nthw/nthw_rac.h
+++ b/drivers/net/ntnic/nthw/nthw_rac.h
@@ -16,7 +16,7 @@ struct nthw_rac {
nthw_fpga_t *mp_fpga;
nthw_module_t *mp_mod_rac;
- pthread_mutex_t m_mutex;
+ rte_spinlock_t m_mutex;
int mn_param_rac_rab_interfaces;
int mn_param_rac_rab_ob_update;
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index df9ee77e06..91669caceb 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -18,6 +18,7 @@
#include <sys/queue.h>
+#include "rte_spinlock.h"
#include "ntlog.h"
#include "ntdrv_4ga.h"
#include "ntos_drv.h"
@@ -236,7 +237,7 @@ static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s
if (!p_nthw_stat || !p_nt4ga_stat || n_intf_no < 0 || n_intf_no > NUM_ADAPTER_PORTS_MAX)
return -1;
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
/* Rx */
for (i = 0; i < internals->nb_rx_queues; i++) {
@@ -256,7 +257,7 @@ static int dpdk_stats_reset(struct pmd_internals *internals, struct ntdrv_4ga_s
p_nt4ga_stat->n_totals_reset_timestamp = time(NULL);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return 0;
}
@@ -1519,9 +1520,9 @@ static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *sta
return -1;
}
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
nb_xstats = ntnic_xstats_ops->nthw_xstats_get(p_nt4ga_stat, stats, n, if_index);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return nb_xstats;
}
@@ -1544,10 +1545,10 @@ static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
return -1;
}
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
nb_xstats =
ntnic_xstats_ops->nthw_xstats_get_by_id(p_nt4ga_stat, ids, values, n, if_index);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return nb_xstats;
}
@@ -1566,9 +1567,9 @@ static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
return -1;
}
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
ntnic_xstats_ops->nthw_xstats_reset(p_nt4ga_stat, if_index);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return dpdk_stats_reset(internals, p_nt_drv, if_index);
}
@@ -1749,14 +1750,14 @@ THREAD_FUNC port_event_thread_fn(void *context)
if (p_nt4ga_stat->flm_stat_ver > 22 && p_nt4ga_stat->mp_stat_structs_flm) {
if (flmdata.lookup != p_nt4ga_stat->mp_stat_structs_flm->load_lps ||
flmdata.access != p_nt4ga_stat->mp_stat_structs_flm->load_aps) {
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
flmdata.lookup = p_nt4ga_stat->mp_stat_structs_flm->load_lps;
flmdata.access = p_nt4ga_stat->mp_stat_structs_flm->load_aps;
flmdata.lookup_maximum =
p_nt4ga_stat->mp_stat_structs_flm->max_lps;
flmdata.access_maximum =
p_nt4ga_stat->mp_stat_structs_flm->max_aps;
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
rte_eth_dev_callback_process(eth_dev,
@@ -1773,7 +1774,7 @@ THREAD_FUNC port_event_thread_fn(void *context)
if (p_nt4ga_stat->mp_port_load) {
if (portdata.rx_bps != p_nt4ga_stat->mp_port_load[port_no].rx_bps ||
portdata.tx_bps != p_nt4ga_stat->mp_port_load[port_no].tx_bps) {
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
portdata.rx_bps = p_nt4ga_stat->mp_port_load[port_no].rx_bps;
portdata.tx_bps = p_nt4ga_stat->mp_port_load[port_no].tx_bps;
portdata.rx_pps = p_nt4ga_stat->mp_port_load[port_no].rx_pps;
@@ -1786,7 +1787,7 @@ THREAD_FUNC port_event_thread_fn(void *context)
p_nt4ga_stat->mp_port_load[port_no].rx_bps_max;
portdata.tx_bps_maximum =
p_nt4ga_stat->mp_port_load[port_no].tx_bps_max;
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
if (eth_dev && eth_dev->data && eth_dev->data->dev_private) {
rte_eth_dev_callback_process(eth_dev,
@@ -1957,9 +1958,9 @@ THREAD_FUNC adapter_stat_thread_fn(void *context)
/* Check then collect */
{
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
nt4ga_stat_ops->nt4ga_stat_collect(&p_nt_drv->adapter_info, p_nt4ga_stat);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
}
}
@@ -2232,7 +2233,7 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
}
}
- pthread_mutex_init(&p_nt_drv->stat_lck, NULL);
+ rte_spinlock_init(&p_nt_drv->stat_lck);
res = THREAD_CTRL_CREATE(&p_nt_drv->stat_thread, "nt4ga_stat_thr", adapter_stat_thread_fn,
(void *)p_drv);
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 8edaccb65c..4c18088681 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -910,7 +910,7 @@ static int poll_statistics(struct pmd_internals *internals)
internals->last_stat_rtc = now_rtc;
- pthread_mutex_lock(&p_nt_drv->stat_lck);
+ rte_spinlock_lock(&p_nt_drv->stat_lck);
/*
* Add the RX statistics increments since last time we polled.
@@ -951,7 +951,7 @@ static int poll_statistics(struct pmd_internals *internals)
/* Globally only once a second */
if ((now_rtc - last_stat_rtc) < rte_tsc_freq) {
rte_spinlock_unlock(&hwlock);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return 0;
}
@@ -988,7 +988,7 @@ static int poll_statistics(struct pmd_internals *internals)
}
rte_spinlock_unlock(&hwlock);
- pthread_mutex_unlock(&p_nt_drv->stat_lck);
+ rte_spinlock_unlock(&p_nt_drv->stat_lck);
return 0;
}
--
2.45.0

* [PATCH v5 75/80] net/ntnic: remove unnecessary
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 74/80] net/ntnic: migrate to the RTE spinlock Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 76/80] net/ntnic: add async create/destroy declaration Serhii Iliushyk
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev; +Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen
Type casting:
dev_private has type void *, so the explicit casts on assignment are
unnecessary and are removed.
FLOW_DEBUG condition:
Remove the FLOW_DEBUG compile-time guards and rely on dynamic logging
instead.
Signed-off-by: Serhii Iliushyk <sil-plv@napatech.com>
---
drivers/net/ntnic/nthw/flow_api/flow_api.c | 4 --
.../net/ntnic/nthw/ntnic_meter/ntnic_meter.c | 18 +++----
drivers/net/ntnic/ntnic_ethdev.c | 48 +++++++++----------
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 16 +++----
4 files changed, 41 insertions(+), 45 deletions(-)
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 337902f654..ef8caefd9a 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -342,7 +342,6 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
ndev->flow_unique_id_counter = 0;
-#ifdef FLOW_DEBUG
/*
* free all resources default allocated, initially for this NIC DEV
* Is not really needed since the bitmap will be freed in a sec. Therefore
@@ -354,9 +353,7 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
for (unsigned int i = 0; i < RES_COUNT; i++) {
int err = 0;
-#if defined(FLOW_DEBUG)
NT_LOG(DBG, FILTER, "RES state for: %s", dbg_res_descr[i]);
-#endif
for (unsigned int ii = 0; ii < ndev->res[i].resource_count; ii++) {
int ref = ndev->res[i].ref[ii];
@@ -373,7 +370,6 @@ static void flow_ndev_reset(struct flow_nic_dev *ndev)
NT_LOG(DBG, FILTER, "ERROR - some resources not freed");
}
-#endif
}
int flow_delete_eth_dev(struct flow_eth_dev *eth_dev)
diff --git a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
index e4e8fe0c7d..33593927a4 100644
--- a/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
+++ b/drivers/net/ntnic/nthw/ntnic_meter/ntnic_meter.c
@@ -42,7 +42,7 @@ static int eth_mtr_capabilities_get_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -110,7 +110,7 @@ static int eth_mtr_meter_profile_add_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
@@ -161,7 +161,7 @@ static int eth_mtr_meter_profile_delete_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (meter_profile_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, NULL,
@@ -184,7 +184,7 @@ static int eth_mtr_meter_policy_add_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (policy_id >= profile_inline_ops->flow_mtr_meter_policy_n_max())
return -rte_mtr_error_set(error, EINVAL, RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
@@ -250,7 +250,7 @@ static int eth_mtr_create_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -316,7 +316,7 @@ static int eth_mtr_destroy_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -348,7 +348,7 @@ static int eth_mtr_stats_adjust_inline(struct rte_eth_dev *eth_dev,
const uint64_t adjust_bit = 1ULL << 63;
const uint64_t probe_bit = 1ULL << 62;
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
if (mtr_id >=
@@ -409,7 +409,7 @@ static int eth_mtr_stats_read_inline(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
uint8_t caller_id = get_caller_id(eth_dev->data->port_id);
@@ -445,7 +445,7 @@ static const struct rte_mtr_ops mtr_ops_inline = {
static int eth_mtr_ops_get(struct rte_eth_dev *eth_dev, void *ops)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
ntdrv_4ga_t *p_nt_drv = &internals->p_drv->ntdrv;
enum fpga_info_profile profile = p_nt_drv->adapter_info.fpga_info.profile;
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 91669caceb..068c3d932a 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -272,7 +272,7 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
@@ -302,14 +302,14 @@ eth_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
static int eth_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
dpdk_stats_collect(internals, stats);
return 0;
}
static int eth_stats_reset(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
const int if_index = internals->n_intf_no;
@@ -327,7 +327,7 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
@@ -957,14 +957,14 @@ static int deallocate_hw_virtio_queues(struct hwq_s *hwq)
static void eth_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t queue_id)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct ntnic_tx_queue *tx_q = &internals->txq_scg[queue_id];
deallocate_hw_virtio_queues(&tx_q->hwq);
}
static void eth_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t queue_id)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct ntnic_rx_queue *rx_q = &internals->rxq_scg[queue_id];
deallocate_hw_virtio_queues(&rx_q->hwq);
}
@@ -994,7 +994,7 @@ static int eth_rx_scg_queue_setup(struct rte_eth_dev *eth_dev,
{
NT_LOG_DBGX(DBG, NTNIC, "Rx queue setup");
struct rte_pktmbuf_pool_private *mbp_priv;
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct ntnic_rx_queue *rx_q = &internals->rxq_scg[rx_queue_id];
struct drv_s *p_drv = internals->p_drv;
struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
@@ -1062,7 +1062,7 @@ static int eth_tx_scg_queue_setup(struct rte_eth_dev *eth_dev,
}
NT_LOG_DBGX(DBG, NTNIC, "Tx queue setup");
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
struct ntnic_tx_queue *tx_q = &internals->txq_scg[tx_queue_id];
@@ -1185,7 +1185,7 @@ eth_mac_addr_add(struct rte_eth_dev *eth_dev,
if (index >= NUM_MAC_ADDRS_PER_PORT) {
const struct pmd_internals *const internals =
- (struct pmd_internals *)eth_dev->data->dev_private;
+ eth_dev->data->dev_private;
NT_LOG_DBGX(DBG, NTNIC, "Port %i: illegal index %u (>= %u)",
internals->n_intf_no, index, NUM_MAC_ADDRS_PER_PORT);
return -1;
@@ -1211,7 +1211,7 @@ eth_set_mc_addr_list(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *mc_addr_set,
uint32_t nb_mc_addr)
{
- struct pmd_internals *const internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *const internals = eth_dev->data->dev_private;
struct rte_ether_addr *const mc_addrs = internals->mc_addrs;
size_t i;
@@ -1252,7 +1252,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
@@ -1313,7 +1313,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
static int
eth_dev_stop(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
NT_LOG_DBGX(DBG, NTNIC, "Port %u", internals->n_intf_no);
@@ -1341,7 +1341,7 @@ eth_dev_set_link_up(struct rte_eth_dev *eth_dev)
return -1;
}
- struct pmd_internals *const internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *const internals = eth_dev->data->dev_private;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
const int port = internals->n_intf_no;
@@ -1367,7 +1367,7 @@ eth_dev_set_link_down(struct rte_eth_dev *eth_dev)
return -1;
}
- struct pmd_internals *const internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *const internals = eth_dev->data->dev_private;
struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
const int port = internals->n_intf_no;
@@ -1440,7 +1440,7 @@ drv_deinit(struct drv_s *p_drv)
static int
eth_dev_close(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
if (internals->type != PORT_TYPE_VIRTUAL) {
@@ -1478,7 +1478,7 @@ eth_dev_close(struct rte_eth_dev *eth_dev)
static int
eth_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, size_t fw_size)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
if (internals->type == PORT_TYPE_VIRTUAL || internals->type == PORT_TYPE_OVERRIDE)
return 0;
@@ -1506,7 +1506,7 @@ static int dev_flow_ops_get(struct rte_eth_dev *dev __rte_unused, const struct r
static int eth_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *stats, unsigned int n)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1531,7 +1531,7 @@ static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
uint64_t *values,
unsigned int n)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1554,7 +1554,7 @@ static int eth_xstats_get_by_id(struct rte_eth_dev *eth_dev,
static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1576,7 +1576,7 @@ static int eth_xstats_reset(struct rte_eth_dev *eth_dev)
static int eth_xstats_get_names(struct rte_eth_dev *eth_dev,
struct rte_eth_xstat_name *xstats_names, unsigned int size)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1596,7 +1596,7 @@ static int eth_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
struct rte_eth_xstat_name *xstats_names,
unsigned int size)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
nt4ga_stat_t *p_nt4ga_stat = &p_nt_drv->adapter_info.nt4ga_stat;
@@ -1627,7 +1627,7 @@ static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_r
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct flow_nic_dev *ndev = internals->flw_dev->ndev;
struct nt_eth_rss_conf tmp_rss_conf = { 0 };
@@ -1662,7 +1662,7 @@ static int eth_dev_rss_hash_update(struct rte_eth_dev *eth_dev, struct rte_eth_r
static int rss_hash_conf_get(struct rte_eth_dev *eth_dev, struct rte_eth_rss_conf *rss_conf)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct flow_nic_dev *ndev = internals->flw_dev->ndev;
rss_conf->algorithm = (enum rte_eth_hash_function)ndev->rss_conf.algorithm;
@@ -1723,7 +1723,7 @@ static struct eth_dev_ops nthw_eth_dev_ops = {
*/
THREAD_FUNC port_event_thread_fn(void *context)
{
- struct pmd_internals *internals = (struct pmd_internals *)context;
+ struct pmd_internals *internals = context;
struct drv_s *p_drv = internals->p_drv;
ntdrv_4ga_t *p_nt_drv = &p_drv->ntdrv;
struct adapter_info_s *p_adapter_info = &p_nt_drv->adapter_info;
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 4c18088681..0e20606a41 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -491,7 +491,7 @@ static int convert_flow(struct rte_eth_dev *eth_dev,
struct cnv_action_s *action,
struct rte_flow_error *error)
{
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
static struct rte_flow_error flow_error = {
@@ -554,7 +554,7 @@ eth_flow_destroy(struct rte_eth_dev *eth_dev, struct rte_flow *flow, struct rte_
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
@@ -595,7 +595,7 @@ static struct rte_flow *eth_flow_create(struct rte_eth_dev *eth_dev,
return NULL;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
@@ -673,7 +673,7 @@ static int eth_flow_flush(struct rte_eth_dev *eth_dev, struct rte_flow_error *er
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
@@ -716,7 +716,7 @@ static int eth_flow_actions_update(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
.message = "none" };
@@ -724,7 +724,7 @@ static int eth_flow_actions_update(struct rte_eth_dev *eth_dev,
if (internals->flw_dev) {
struct pmd_internals *dev_private =
- (struct pmd_internals *)eth_dev->data->dev_private;
+ eth_dev->data->dev_private;
struct fpga_info_s *fpga_info = &dev_private->p_drv->ntdrv.adapter_info.fpga_info;
struct cnv_action_s action = { 0 };
@@ -780,7 +780,7 @@ static int eth_flow_dev_dump(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE, .message = "none" };
@@ -808,7 +808,7 @@ static int eth_flow_get_aged_flows(struct rte_eth_dev *eth_dev,
return -1;
}
- struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+ struct pmd_internals *internals = eth_dev->data->dev_private;
static struct rte_flow_error flow_error = {
.type = RTE_FLOW_ERROR_TYPE_NONE,
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 76/80] net/ntnic: add async create/destroy declaration
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (74 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 75/80] net/ntnic: remove unnecessary Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 77/80] net/ntnic: add async template declaration Serhii Iliushyk
` (3 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
Add the driver-level implementation for asynchronous flow create and destroy.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 8 ++
drivers/net/ntnic/ntnic_ethdev.c | 1 +
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 105 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.c | 15 +++
drivers/net/ntnic/ntnic_mod_reg.h | 18 +++
5 files changed, 147 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index b40a27fbf1..505fb8e501 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -343,6 +343,14 @@ struct flow_handle {
};
};
+struct flow_pattern_template {
+};
+
+struct flow_actions_template {
+};
+struct flow_template_table {
+};
+
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
void km_free_ndev_resource_management(void **handle);
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 068c3d932a..77436eb02d 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -1252,6 +1252,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
return -1;
}
+ eth_dev->flow_fp_ops = get_dev_fp_flow_ops();
struct pmd_internals *internals = eth_dev->data->dev_private;
const int n_intf_no = internals->n_intf_no;
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index 0e20606a41..d1f3ed4831 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -4,6 +4,11 @@
*/
#include <rte_flow_driver.h>
+#include <rte_pci.h>
+#include <rte_version.h>
+#include <rte_flow.h>
+
+#include "ntlog.h"
#include "nt_util.h"
#include "create_elements.h"
#include "ntnic_mod_reg.h"
@@ -881,6 +886,96 @@ static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_por
return res;
}
+static struct rte_flow *eth_flow_async_create(struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t actions_template_index, void *user_data, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct cnv_action_s action = { 0 };
+ struct cnv_match_s match = { 0 };
+
+ if (create_match_elements(&match, pattern, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in pattern");
+ return NULL;
+ }
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ uint32_t queue_offset = 0;
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0)
+ queue_offset = internals->vpq[0].id;
+
+ if (create_action_elements_inline(&action, actions, MAX_ACTIONS, queue_offset) !=
+ 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return NULL;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return NULL;
+ }
+
+ struct flow_handle *res =
+ flow_filter_ops->flow_async_create(internals->flw_dev,
+ queue_id,
+ (const struct rte_flow_op_attr *)op_attr,
+ (struct flow_template_table *)template_table,
+ match.rte_flow_item,
+ pattern_template_index,
+ action.flow_actions,
+ actions_template_index,
+ user_data,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return (struct rte_flow *)res;
+}
+
+static int eth_flow_async_destroy(struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct rte_flow *flow,
+ void *user_data, struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_async_destroy(internals->flw_dev,
+ queue_id,
+ (const struct rte_flow_op_attr *)op_attr,
+ (struct flow_handle *)flow,
+ user_data,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
static int poll_statistics(struct pmd_internals *internals)
{
int flow;
@@ -1017,3 +1112,13 @@ void dev_flow_init(void)
{
register_dev_flow_ops(&dev_flow_ops);
}
+
+static struct rte_flow_fp_ops async_dev_flow_ops = {
+ .async_create = eth_flow_async_create,
+ .async_destroy = eth_flow_async_destroy,
+};
+
+void dev_fp_flow_init(void)
+{
+ register_dev_fp_flow_ops(&async_dev_flow_ops);
+}
diff --git a/drivers/net/ntnic/ntnic_mod_reg.c b/drivers/net/ntnic/ntnic_mod_reg.c
index 10aa778a57..658fac72c0 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.c
+++ b/drivers/net/ntnic/ntnic_mod_reg.c
@@ -199,6 +199,21 @@ const struct flow_filter_ops *get_flow_filter_ops(void)
return flow_filter_ops;
}
+static const struct rte_flow_fp_ops *dev_fp_flow_ops;
+
+void register_dev_fp_flow_ops(const struct rte_flow_fp_ops *ops)
+{
+ dev_fp_flow_ops = ops;
+}
+
+const struct rte_flow_fp_ops *get_dev_fp_flow_ops(void)
+{
+ if (dev_fp_flow_ops == NULL)
+ dev_fp_flow_init();
+
+ return dev_fp_flow_ops;
+}
+
static const struct rte_flow_ops *dev_flow_ops;
void register_dev_flow_ops(const struct rte_flow_ops *ops)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 563e62ebce..572da11d02 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -7,6 +7,7 @@
#define __NTNIC_MOD_REG_H__
#include <stdint.h>
+#include <rte_flow.h>
#include "rte_ethdev.h"
#include "rte_flow_driver.h"
@@ -426,6 +427,19 @@ struct flow_filter_ops {
uint32_t nb_contexts,
struct rte_flow_error *error);
+ /*
+ * RTE flow asynchronous operations functions
+ */
+ struct flow_handle *(*flow_async_create)(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t actions_template_index, void *user_data, struct rte_flow_error *error);
+
+ int (*flow_async_destroy)(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error);
+
int (*flow_info_get)(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info, struct rte_flow_queue_info *queue_info,
struct rte_flow_error *error);
@@ -436,6 +450,10 @@ struct flow_filter_ops {
struct rte_flow_error *error);
};
+void register_dev_fp_flow_ops(const struct rte_flow_fp_ops *ops);
+const struct rte_flow_fp_ops *get_dev_fp_flow_ops(void);
+void dev_fp_flow_init(void);
+
void register_dev_flow_ops(const struct rte_flow_ops *ops);
const struct rte_flow_ops *get_dev_flow_ops(void);
void dev_flow_init(void);
--
2.45.0
* [PATCH v5 77/80] net/ntnic: add async template declaration
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (75 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 76/80] net/ntnic: add async create/destroy declaration Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 78/80] net/ntnic: add async flow create/delete implementation Serhii Iliushyk
` (2 subsequent siblings)
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
rte_flow_ops was extended to support the following features:
1. Flow pattern template create
2. Flow pattern template destroy
3. Flow actions template create
4. Flow actions template destroy
5. Flow template table create
6. Flow template table destroy
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 224 ++++++++++++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 28 +++
2 files changed, 252 insertions(+)
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index d1f3ed4831..06b6ae442b 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -886,6 +886,224 @@ static int eth_flow_configure(struct rte_eth_dev *dev, const struct rte_flow_por
return res;
}
+static struct rte_flow_pattern_template *eth_flow_pattern_template_create(struct rte_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct cnv_match_s match = { 0 };
+ struct rte_flow_pattern_template_attr attr = {
+ .relaxed_matching = template_attr->relaxed_matching,
+ .ingress = template_attr->ingress,
+ .egress = template_attr->egress,
+ .transfer = template_attr->transfer,
+ };
+
+ uint16_t caller_id = get_caller_id(dev->data->port_id);
+
+ if (create_match_elements(&match, pattern, MAX_ELEMENTS) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Error in pattern");
+ return NULL;
+ }
+
+ struct flow_pattern_template *res =
+ flow_filter_ops->flow_pattern_template_create(internals->flw_dev, &attr, caller_id,
+ match.rte_flow_item, &flow_error);
+
+ convert_error(error, &flow_error);
+ return (struct rte_flow_pattern_template *)res;
+}
+
+static int eth_flow_pattern_template_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_pattern_template *pattern_template,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_pattern_template_destroy(internals->flw_dev,
+ (struct flow_pattern_template *)
+ pattern_template,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
+static struct rte_flow_actions_template *eth_flow_actions_template_create(struct rte_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ struct fpga_info_s *fpga_info = &internals->p_drv->ntdrv.adapter_info.fpga_info;
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct cnv_action_s action = { 0 };
+ struct cnv_action_s mask = { 0 };
+ struct rte_flow_actions_template_attr attr = {
+ .ingress = template_attr->ingress,
+ .egress = template_attr->egress,
+ .transfer = template_attr->transfer,
+ };
+ uint16_t caller_id = get_caller_id(dev->data->port_id);
+
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
+ uint32_t queue_offset = 0;
+
+ if (internals->type == PORT_TYPE_OVERRIDE && internals->vpq_nb_vq > 0)
+ queue_offset = internals->vpq[0].id;
+
+ if (create_action_elements_inline(&action, actions, MAX_ACTIONS, queue_offset) !=
+ 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in actions");
+ return NULL;
+ }
+
+ if (create_action_elements_inline(&mask, masks, MAX_ACTIONS, queue_offset) != 0) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Error in masks");
+ return NULL;
+ }
+
+ } else {
+ rte_flow_error_set(error, EPERM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Unsupported adapter profile");
+ return NULL;
+ }
+
+ struct flow_actions_template *res =
+ flow_filter_ops->flow_actions_template_create(internals->flw_dev, &attr, caller_id,
+ action.flow_actions,
+ mask.flow_actions, &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return (struct rte_flow_actions_template *)res;
+}
+
+static int eth_flow_actions_template_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_actions_template *actions_template,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_actions_template_destroy(internals->flw_dev,
+ (struct flow_actions_template *)
+ actions_template,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
+static struct rte_flow_template_table *eth_flow_template_table_create(struct rte_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr,
+ struct rte_flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct rte_flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return NULL;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ struct rte_flow_template_table_attr attr = {
+ .flow_attr = {
+ .group = table_attr->flow_attr.group,
+ .priority = table_attr->flow_attr.priority,
+ .ingress = table_attr->flow_attr.ingress,
+ .egress = table_attr->flow_attr.egress,
+ .transfer = table_attr->flow_attr.transfer,
+ },
+ .nb_flows = table_attr->nb_flows,
+ };
+ uint16_t forced_vlan_vid = 0;
+ uint16_t caller_id = get_caller_id(dev->data->port_id);
+
+ struct flow_template_table *res =
+ flow_filter_ops->flow_template_table_create(internals->flw_dev, &attr,
+ forced_vlan_vid, caller_id,
+ (struct flow_pattern_template **)pattern_templates,
+ nb_pattern_templates, (struct flow_actions_template **)actions_templates,
+ nb_actions_templates, &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return (struct rte_flow_template_table *)res;
+}
+
+static int eth_flow_template_table_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_template_table *template_table,
+ struct rte_flow_error *error)
+{
+ const struct flow_filter_ops *flow_filter_ops = get_flow_filter_ops();
+
+ if (flow_filter_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "flow_filter module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = dev->data->dev_private;
+
+ static struct rte_flow_error rte_flow_error = { .type = RTE_FLOW_ERROR_TYPE_NONE,
+ .message = "none" };
+
+ int res = flow_filter_ops->flow_template_table_destroy(internals->flw_dev,
+ (struct flow_template_table *)
+ template_table,
+ &rte_flow_error);
+
+ convert_error(error, &rte_flow_error);
+ return res;
+}
+
static struct rte_flow *eth_flow_async_create(struct rte_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
struct rte_flow_template_table *template_table, const struct rte_flow_item pattern[],
@@ -1106,6 +1324,12 @@ static const struct rte_flow_ops dev_flow_ops = {
.get_aged_flows = eth_flow_get_aged_flows,
.info_get = eth_flow_info_get,
.configure = eth_flow_configure,
+ .pattern_template_create = eth_flow_pattern_template_create,
+ .pattern_template_destroy = eth_flow_pattern_template_destroy,
+ .actions_template_create = eth_flow_actions_template_create,
+ .actions_template_destroy = eth_flow_actions_template_destroy,
+ .template_table_create = eth_flow_template_table_create,
+ .template_table_destroy = eth_flow_template_table_destroy,
};
void dev_flow_init(void)
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 572da11d02..92856b81d5 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -430,6 +430,34 @@ struct flow_filter_ops {
/*
* RTE flow asynchronous operations functions
*/
+ struct flow_pattern_template *(*flow_pattern_template_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error);
+
+ int (*flow_pattern_template_destroy)(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error);
+
+ struct flow_actions_template *(*flow_actions_template_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error);
+
+ int (*flow_actions_template_destroy)(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error);
+
+ struct flow_template_table *(*flow_template_table_create)(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error);
+
+ int (*flow_template_table_destroy)(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error);
+
struct flow_handle *(*flow_async_create)(struct flow_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
struct flow_template_table *template_table, const struct rte_flow_item pattern[],
--
2.45.0
* [PATCH v5 78/80] net/ntnic: add async flow create/delete implementation
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (76 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 77/80] net/ntnic: add async template declaration Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 79/80] net/ntnic: add async template implementation Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 80/80] net/ntnic: add MTU configuration Serhii Iliushyk
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The inline profile was extended with an implementation of the asynchronous flow create and destroy features.
The async create and destroy callbacks were added to the flow filter ops.
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 36 +++
drivers/net/ntnic/nthw/flow_api/flow_api.c | 39 +++
.../profile_inline/flow_api_hw_db_inline.c | 13 +
.../profile_inline/flow_api_hw_db_inline.h | 2 +
.../profile_inline/flow_api_profile_inline.c | 248 +++++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 14 +
drivers/net/ntnic/ntnic_mod_reg.h | 15 ++
9 files changed, 368 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index afdaf22e0b..fa6cd2b95c 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -70,6 +70,7 @@ Features
- Flow aging support
- Flow metering, including meter policy API.
- Flow update. Update of the action list for specific flow
+- Asynchronous flow support
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 735a295f6e..13f7dada4b 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -167,6 +167,7 @@ New Features
* Added age rte flow action support
* Added meter flow metering and flow policy support
* Added flow actions update support
+ * Added asynchronous flow support
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 505fb8e501..6935ff483a 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -339,6 +339,12 @@ struct flow_handle {
uint8_t flm_rqi;
uint8_t flm_qfi;
uint8_t flm_scrub_prof;
+
+ /* Flow specific pointer to application template table cell stored during
+ * flow create.
+ */
+ struct flow_template_table_cell *template_table_cell;
+ bool flm_async;
};
};
};
@@ -347,8 +353,38 @@ struct flow_pattern_template {
};
struct flow_actions_template {
+ struct nic_flow_def *fd;
+
+ uint32_t num_dest_port;
+ uint32_t num_queues;
};
+
+struct flow_template_table_cell {
+ atomic_int status;
+ atomic_int counter;
+
+ uint32_t flm_db_idx_counter;
+ uint32_t flm_db_idxs[RES_COUNT];
+
+ uint32_t flm_key_id;
+ uint32_t flm_ft;
+
+ uint16_t flm_rpl_ext_ptr;
+ uint8_t flm_scrub_prof;
+};
+
struct flow_template_table {
+ struct flow_pattern_template **pattern_templates;
+ uint8_t nb_pattern_templates;
+
+ struct flow_actions_template **actions_templates;
+ uint8_t nb_actions_templates;
+
+ struct flow_template_table_cell *pattern_action_pairs;
+
+ struct rte_flow_attr attr;
+ uint16_t forced_vlan_vid;
+ uint16_t caller_id;
};
void km_attach_ndev_resource_management(struct km_flow_def_s *km, void **handle);
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index ef8caefd9a..884a59a5de 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1077,6 +1077,43 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
nb_queue, queue_attr, error);
}
+/*
+ * Flow Asynchronous operation API
+ */
+
+static struct flow_handle *
+flow_async_create(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_template_table *template_table,
+ const struct rte_flow_item pattern[], uint8_t pattern_template_index,
+ const struct rte_flow_action actions[], uint8_t actions_template_index, void *user_data,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_async_create_profile_inline(dev, queue_id, op_attr,
+ template_table, pattern, pattern_template_index, actions,
+ actions_template_index, user_data, error);
+}
+
+static int flow_async_destroy(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_async_destroy_profile_inline(dev, queue_id, op_attr, flow,
+ user_data, error);
+}
int flow_get_flm_stats(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size)
{
const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
@@ -1113,6 +1150,8 @@ static const struct flow_filter_ops ops = {
*/
.flow_info_get = flow_info_get,
.flow_configure = flow_configure,
+ .flow_async_create = flow_async_create,
+ .flow_async_destroy = flow_async_destroy,
/*
* Other
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
index 2fee6ae6b5..ffab643f56 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.c
@@ -393,6 +393,19 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
}
}
+struct hw_db_idx *hw_db_inline_find_idx(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size)
+{
+ (void)ndev;
+ (void)db_handle;
+ for (uint32_t i = 0; i < size; ++i) {
+ if (idxs[i].type == type)
+ return &idxs[i];
+ }
+
+ return NULL;
+}
+
void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
uint32_t size, FILE *file)
{
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
index c920d36cfd..aa046b68a7 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_hw_db_inline.h
@@ -287,6 +287,8 @@ void hw_db_inline_deref_idxs(struct flow_nic_dev *ndev, void *db_handle, struct
uint32_t size);
const void *hw_db_inline_find_data(struct flow_nic_dev *ndev, void *db_handle,
enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
+struct hw_db_idx *hw_db_inline_find_idx(struct flow_nic_dev *ndev, void *db_handle,
+ enum hw_db_idx_type type, struct hw_db_idx *idxs, uint32_t size);
void hw_db_inline_dump(struct flow_nic_dev *ndev, void *db_handle, const struct hw_db_idx *idxs,
uint32_t size, FILE *file);
void hw_db_inline_dump_cfn(struct flow_nic_dev *ndev, void *db_handle, FILE *file);
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index f9133ad802..5d1244bddf 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -3,7 +3,6 @@
* Copyright(c) 2023 Napatech A/S
*/
-#include "generic/rte_spinlock.h"
#include "ntlog.h"
#include "nt_util.h"
@@ -64,6 +63,11 @@
#define POLICING_PARAMETER_OFFSET 4096
#define SIZE_CONVERTER 1099.511627776
+#define CELL_STATUS_UNINITIALIZED 0
+#define CELL_STATUS_INITIALIZING 1
+#define CELL_STATUS_INITIALIZED_TYPE_FLOW 2
+#define CELL_STATUS_INITIALIZED_TYPE_FLM 3
+
struct flm_mtr_stat_s {
struct dual_buckets_s *buckets;
atomic_uint_fast64_t n_pkt;
@@ -1034,6 +1038,17 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
return 0;
}
+static inline const void *memcpy_or(void *dest, const void *src, size_t count)
+{
+ unsigned char *dest_ptr = (unsigned char *)dest;
+ const unsigned char *src_ptr = (const unsigned char *)src;
+
+ for (size_t i = 0; i < count; ++i)
+ dest_ptr[i] |= src_ptr[i];
+
+ return dest;
+}
+
/*
* This function must be callable without locking any mutexes
*/
@@ -4341,6 +4356,9 @@ int flow_destroy_profile_inline(struct flow_eth_dev *dev, struct flow_handle *fl
{
int err = 0;
+ if (flow && flow->type == FLOW_HANDLE_TYPE_FLM && flow->flm_async)
+ return flow_async_destroy_profile_inline(dev, 0, NULL, flow, NULL, error);
+
flow_nic_set_error(ERR_SUCCESS, error);
if (flow) {
@@ -5485,6 +5503,232 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
+struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev,
+ uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table,
+ const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index,
+ const struct rte_flow_action actions[],
+ uint8_t actions_template_index,
+ void *user_data,
+ struct rte_flow_error *error)
+{
+ (void)queue_id;
+ (void)op_attr;
+ struct flow_handle *fh = NULL;
+ int res, status;
+
+ const uint32_t pattern_action_index =
+ (uint32_t)template_table->nb_actions_templates * pattern_template_index +
+ actions_template_index;
+ struct flow_template_table_cell *pattern_action_pair =
+ &template_table->pattern_action_pairs[pattern_action_index];
+
+ uint32_t num_dest_port =
+ template_table->actions_templates[actions_template_index]->num_dest_port;
+ uint32_t num_queues =
+ template_table->actions_templates[actions_template_index]->num_queues;
+
+ uint32_t port_id = UINT32_MAX;
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct nic_flow_def *fd = malloc(sizeof(struct nic_flow_def));
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate flow_def";
+ goto err_exit;
+ }
+
+ memcpy(fd, template_table->actions_templates[actions_template_index]->fd,
+ sizeof(struct nic_flow_def));
+
+ res = interpret_flow_elements(dev, pattern, fd, error,
+ template_table->forced_vlan_vid, &port_id, packet_data,
+ packet_mask, &key_def);
+
+ if (res)
+ goto err_exit;
+
+ if (port_id == UINT32_MAX)
+ port_id = dev->port_id;
+
+ {
+ uint32_t num_dest_port_tmp = 0;
+ uint32_t num_queues_tmp = 0;
+
+ struct nic_flow_def action_fd = { 0 };
+ prepare_nic_flow_def(&action_fd);
+
+ res = interpret_flow_actions(dev, actions, NULL, &action_fd, error,
+ &num_dest_port_tmp, &num_queues_tmp);
+
+ if (res)
+ goto err_exit;
+
+ /* Copy FLM unique actions: modify_field, meter, encap/decap and age */
+ memcpy_or(fd->mtr_ids, action_fd.mtr_ids, sizeof(action_fd.mtr_ids));
+ memcpy_or(&fd->tun_hdr, &action_fd.tun_hdr, sizeof(struct tunnel_header_s));
+ memcpy_or(fd->modify_field, action_fd.modify_field,
+ sizeof(action_fd.modify_field));
+ fd->modify_field_count = action_fd.modify_field_count;
+ memcpy_or(&fd->age, &action_fd.age, sizeof(struct rte_flow_action_age));
+ }
+
+ status = atomic_load(&pattern_action_pair->status);
+
+ /* Initializing template entry */
+ if (status < CELL_STATUS_INITIALIZED_TYPE_FLOW) {
+ if (status == CELL_STATUS_UNINITIALIZED &&
+ atomic_compare_exchange_strong(&pattern_action_pair->status, &status,
+ CELL_STATUS_INITIALIZING)) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+
+ fh = create_flow_filter(dev, fd, &template_table->attr,
+ template_table->forced_vlan_vid, template_table->caller_id,
+ error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ if (fh == NULL) {
+ /* reset status to CELL_STATUS_UNINITIALIZED to avoid a deadlock */
+ atomic_store(&pattern_action_pair->status,
+ CELL_STATUS_UNINITIALIZED);
+ goto err_exit;
+ }
+
+ if (fh->type == FLOW_HANDLE_TYPE_FLM) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+
+ struct hw_db_idx *flm_ft_idx =
+ hw_db_inline_find_idx(dev->ndev, dev->ndev->hw_db_handle,
+ HW_DB_IDX_TYPE_FLM_FT,
+ (struct hw_db_idx *)fh->flm_db_idxs,
+ fh->flm_db_idx_counter);
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ pattern_action_pair->flm_db_idx_counter = fh->flm_db_idx_counter;
+ memcpy(pattern_action_pair->flm_db_idxs, fh->flm_db_idxs,
+ sizeof(struct hw_db_idx) * fh->flm_db_idx_counter);
+
+ pattern_action_pair->flm_key_id = fh->flm_kid;
+ pattern_action_pair->flm_ft = flm_ft_idx->id1;
+
+ pattern_action_pair->flm_rpl_ext_ptr = fh->flm_rpl_ext_ptr;
+ pattern_action_pair->flm_scrub_prof = fh->flm_scrub_prof;
+
+ atomic_store(&pattern_action_pair->status,
+ CELL_STATUS_INITIALIZED_TYPE_FLM);
+
+ /* increment template table cell reference */
+ atomic_fetch_add(&pattern_action_pair->counter, 1);
+ fh->template_table_cell = pattern_action_pair;
+ fh->flm_async = true;
+
+ } else {
+ atomic_store(&pattern_action_pair->status,
+ CELL_STATUS_INITIALIZED_TYPE_FLOW);
+ }
+
+ } else {
+ do {
+ nt_os_wait_usec(1);
+ status = atomic_load(&pattern_action_pair->status);
+ } while (status == CELL_STATUS_INITIALIZING);
+
+ /* Handle the case where create_flow_filter() failed in the
+ * initializing thread
+ */
+ if (status == CELL_STATUS_UNINITIALIZED)
+ goto err_exit;
+ }
+ }
+
+ /* FLM learn */
+ if (fh == NULL && status == CELL_STATUS_INITIALIZED_TYPE_FLM) {
+ fh = calloc(1, sizeof(struct flow_handle));
+
+ if (fh == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate flow_handle";
+ goto err_exit;
+ }
+
+ fh->type = FLOW_HANDLE_TYPE_FLM;
+ fh->dev = dev;
+ fh->caller_id = template_table->caller_id;
+ fh->user_data = user_data;
+
+ copy_fd_to_fh_flm(fh, fd, packet_data, pattern_action_pair->flm_key_id,
+ pattern_action_pair->flm_ft,
+ pattern_action_pair->flm_rpl_ext_ptr,
+ pattern_action_pair->flm_scrub_prof,
+ template_table->attr.priority & 0x3);
+
+ free(fd);
+
+ flm_flow_programming(fh, NT_FLM_OP_LEARN);
+
+ nic_insert_flow_flm(dev->ndev, fh);
+
+ /* increment template table cell reference */
+ atomic_fetch_add(&pattern_action_pair->counter, 1);
+ fh->template_table_cell = pattern_action_pair;
+ fh->flm_async = true;
+
+ } else if (fh == NULL) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+
+ fh = create_flow_filter(dev, fd, &template_table->attr,
+ template_table->forced_vlan_vid, template_table->caller_id,
+ error, port_id, num_dest_port, num_queues, packet_data,
+ packet_mask, &key_def);
+
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ if (fh == NULL)
+ goto err_exit;
+ }
+
+ if (fh) {
+ fh->caller_id = template_table->caller_id;
+ fh->user_data = user_data;
+ }
+
+ return fh;
+
+err_exit:
+ free(fd);
+ free(fh);
+
+ return NULL;
+}
+
+int flow_async_destroy_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error)
+{
+ (void)queue_id;
+ (void)op_attr;
+ (void)user_data;
+
+ if (flow->type == FLOW_HANDLE_TYPE_FLOW)
+ return flow_destroy_profile_inline(dev, flow, error);
+
+ if (flm_flow_programming(flow, NT_FLM_OP_UNLEARN)) {
+ NT_LOG(ERR, FILTER, "FAILED to destroy flow: %p", flow);
+ flow_nic_set_error(ERR_REMOVE_FLOW_FAILED, error);
+ return -1;
+ }
+
+ nic_remove_flow_flm(dev->ndev, flow);
+
+ free(flow);
+
+ return 0;
+}
+
static const struct profile_inline_ops ops = {
/*
* Management
@@ -5509,6 +5753,8 @@ static const struct profile_inline_ops ops = {
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
.flow_info_get_profile_inline = flow_info_get_profile_inline,
.flow_configure_profile_inline = flow_configure_profile_inline,
+ .flow_async_create_profile_inline = flow_async_create_profile_inline,
+ .flow_async_destroy_profile_inline = flow_async_destroy_profile_inline,
/*
* NT Flow FLM Meter API
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 8a03be1ab7..b548142342 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -69,6 +69,20 @@ int flow_nic_set_hasher_fields_inline(struct flow_nic_dev *ndev,
int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data, uint64_t size);
+/*
+ * RTE flow asynchronous operations functions
+ */
+
+struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t actions_template_index, void *user_data, struct rte_flow_error *error);
+
+int flow_async_destroy_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr, struct flow_handle *flow,
+ void *user_data, struct rte_flow_error *error);
+
int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info,
struct rte_flow_queue_info *queue_info, struct rte_flow_error *error);
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index 92856b81d5..e8e7090661 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -310,6 +310,21 @@ struct profile_inline_ops {
uint32_t nb_contexts,
struct rte_flow_error *error);
+ /*
+ * RTE flow asynchronous operations functions
+ */
+
+ struct flow_handle *(*flow_async_create_profile_inline)(struct flow_eth_dev *dev,
+ uint32_t queue_id, const struct rte_flow_op_attr *op_attr,
+ struct flow_template_table *template_table, const struct rte_flow_item pattern[],
+ uint8_t rte_pattern_template_index, const struct rte_flow_action actions[],
+ uint8_t rte_actions_template_index, void *user_data, struct rte_flow_error *error);
+
+ int (*flow_async_destroy_profile_inline)(struct flow_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct flow_handle *flow, void *user_data,
+ struct rte_flow_error *error);
+
int (*flow_nic_set_hasher_fields_inline)(struct flow_nic_dev *ndev,
int hsh_idx,
struct nt_eth_rss_conf rss_conf);
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 79/80] net/ntnic: add async template implementation
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (77 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 78/80] net/ntnic: add async flow create/delete implementation Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 80/80] net/ntnic: add MTU configuration Serhii Iliushyk
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Danylo Vodopianov
From: Danylo Vodopianov <dvo-plv@napatech.com>
The flow filter ops and the inline API were extended with the following APIs:
1. flow pattern template create
2. flow pattern template destroy
3. flow actions template create
4. flow actions template destroy
5. flow template table create
6. flow template table destroy
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
---
drivers/net/ntnic/include/flow_api_engine.h | 1 +
drivers/net/ntnic/nthw/flow_api/flow_api.c | 104 ++++++++
.../profile_inline/flow_api_profile_inline.c | 225 ++++++++++++++++++
.../profile_inline/flow_api_profile_inline.h | 28 +++
drivers/net/ntnic/ntnic_mod_reg.h | 30 +++
5 files changed, 388 insertions(+)
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 6935ff483a..8604dde995 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -350,6 +350,7 @@ struct flow_handle {
};
struct flow_pattern_template {
+ struct nic_flow_def *fd;
};
struct flow_actions_template {
diff --git a/drivers/net/ntnic/nthw/flow_api/flow_api.c b/drivers/net/ntnic/nthw/flow_api/flow_api.c
index 884a59a5de..98b6e49755 100644
--- a/drivers/net/ntnic/nthw/flow_api/flow_api.c
+++ b/drivers/net/ntnic/nthw/flow_api/flow_api.c
@@ -1081,6 +1081,104 @@ static int flow_configure(struct flow_eth_dev *dev, uint8_t caller_id,
* Flow Asynchronous operation API
*/
+static struct flow_pattern_template *
+flow_pattern_template_create(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_pattern_template_create_profile_inline(dev, template_attr,
+ caller_id, pattern, error);
+}
+
+static int flow_pattern_template_destroy(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_pattern_template_destroy_profile_inline(dev,
+ pattern_template,
+ error);
+}
+
+static struct flow_actions_template *
+flow_actions_template_create(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_actions_template_create_profile_inline(dev, template_attr,
+ caller_id, actions, masks, error);
+}
+
+static int flow_actions_template_destroy(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_actions_template_destroy_profile_inline(dev,
+ actions_template,
+ error);
+}
+
+static struct flow_template_table *flow_template_table_create(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id, struct flow_pattern_template *pattern_templates[],
+ uint8_t nb_pattern_templates, struct flow_actions_template *actions_templates[],
+ uint8_t nb_actions_templates, struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return NULL;
+ }
+
+ return profile_inline_ops->flow_template_table_create_profile_inline(dev, table_attr,
+ forced_vlan_vid, caller_id, pattern_templates, nb_pattern_templates,
+ actions_templates, nb_actions_templates, error);
+}
+
+static int flow_template_table_destroy(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, FILTER, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ return profile_inline_ops->flow_template_table_destroy_profile_inline(dev, template_table,
+ error);
+}
+
static struct flow_handle *
flow_async_create(struct flow_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr, struct flow_template_table *template_table,
@@ -1150,6 +1248,12 @@ static const struct flow_filter_ops ops = {
*/
.flow_info_get = flow_info_get,
.flow_configure = flow_configure,
+ .flow_pattern_template_create = flow_pattern_template_create,
+ .flow_pattern_template_destroy = flow_pattern_template_destroy,
+ .flow_actions_template_create = flow_actions_template_create,
+ .flow_actions_template_destroy = flow_actions_template_destroy,
+ .flow_template_table_create = flow_template_table_create,
+ .flow_template_table_destroy = flow_template_table_destroy,
.flow_async_create = flow_async_create,
.flow_async_destroy = flow_async_destroy,
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 5d1244bddf..9fd943365f 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -5503,6 +5503,223 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
return -1;
}
+struct flow_pattern_template *flow_pattern_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error)
+{
+ (void)template_attr;
+ (void)caller_id;
+ uint32_t port_id = 0;
+ uint32_t packet_data[10];
+ uint32_t packet_mask[10];
+ struct flm_flow_key_def_s key_def;
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate flow_def";
+ return NULL;
+ }
+
+ /* Note that forced_vlan_vid is unavailable at this point in time */
+ int res = interpret_flow_elements(dev, pattern, fd, error, 0, &port_id, packet_data,
+ packet_mask, &key_def);
+
+ if (res) {
+ free(fd);
+ return NULL;
+ }
+
+ struct flow_pattern_template *template = calloc(1, sizeof(struct flow_pattern_template));
+
+ if (template == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate pattern_template";
+ free(fd);
+ return NULL;
+ }
+
+ template->fd = fd;
+
+ return template;
+}
+
+int flow_pattern_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ free(pattern_template->fd);
+ free(pattern_template);
+
+ return 0;
+}
+
+struct flow_actions_template *
+flow_actions_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[],
+ const struct rte_flow_action masks[],
+ struct rte_flow_error *error)
+{
+ (void)template_attr;
+ int res;
+
+ uint32_t num_dest_port = 0;
+ uint32_t num_queues = 0;
+
+ struct nic_flow_def *fd = allocate_nic_flow_def();
+
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ if (fd == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate flow_def";
+ return NULL;
+ }
+
+ res = interpret_flow_actions(dev, actions, masks, fd, error, &num_dest_port, &num_queues);
+
+ if (res) {
+ free(fd);
+ return NULL;
+ }
+
+ /* Translate group IDs */
+ if (fd->jump_to_group != UINT32_MAX) {
+ rte_spinlock_lock(&dev->ndev->mtx);
+ res = flow_group_translate_get(dev->ndev->group_handle, caller_id,
+ dev->port, fd->jump_to_group, &fd->jump_to_group);
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ if (res) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ free(fd);
+ return NULL;
+ }
+ }
+
+ struct flow_actions_template *template = calloc(1, sizeof(struct flow_actions_template));
+
+ if (template == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate actions_template";
+ free(fd);
+ return NULL;
+ }
+
+ template->fd = fd;
+ template->num_dest_port = num_dest_port;
+ template->num_queues = num_queues;
+
+ return template;
+}
+
+int flow_actions_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ free(actions_template->fd);
+ free(actions_template);
+
+ return 0;
+}
+
+struct flow_template_table *flow_template_table_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ struct flow_template_table *template_table = calloc(1, sizeof(struct flow_template_table));
+
+ if (template_table == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate template_table";
+ goto error_out;
+ }
+
+ template_table->pattern_templates =
+ malloc(sizeof(struct flow_pattern_template *) * nb_pattern_templates);
+ template_table->actions_templates =
+ malloc(sizeof(struct flow_actions_template *) * nb_actions_templates);
+ template_table->pattern_action_pairs =
+ calloc((uint32_t)nb_pattern_templates * nb_actions_templates,
+ sizeof(struct flow_template_table_cell));
+
+ if (template_table->pattern_templates == NULL ||
+ template_table->actions_templates == NULL ||
+ template_table->pattern_action_pairs == NULL) {
+ error->type = RTE_FLOW_ERROR_TYPE_UNSPECIFIED;
+ error->message = "Failed to allocate template_table variables";
+ goto error_out;
+ }
+
+ template_table->attr.priority = table_attr->flow_attr.priority;
+ template_table->attr.group = table_attr->flow_attr.group;
+ template_table->forced_vlan_vid = forced_vlan_vid;
+ template_table->caller_id = caller_id;
+
+ template_table->nb_pattern_templates = nb_pattern_templates;
+ template_table->nb_actions_templates = nb_actions_templates;
+
+ memcpy(template_table->pattern_templates, pattern_templates,
+ sizeof(struct flow_pattern_template *) * nb_pattern_templates);
memcpy(template_table->actions_templates, actions_templates,
+ sizeof(struct flow_actions_template *) * nb_actions_templates);
+
+ rte_spinlock_lock(&dev->ndev->mtx);
+ int res =
+ flow_group_translate_get(dev->ndev->group_handle, caller_id, dev->port,
+ template_table->attr.group, &template_table->attr.group);
+ rte_spinlock_unlock(&dev->ndev->mtx);
+
+ /* Translate group IDs */
+ if (res) {
+ NT_LOG(ERR, FILTER, "ERROR: Could not get group resource");
+ flow_nic_set_error(ERR_MATCH_RESOURCE_EXHAUSTION, error);
+ goto error_out;
+ }
+
+ return template_table;
+
+error_out:
+
+ if (template_table) {
+ free(template_table->pattern_templates);
+ free(template_table->actions_templates);
+ free(template_table->pattern_action_pairs);
+ free(template_table);
+ }
+
+ return NULL;
+}
+
+int flow_template_table_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error)
+{
+ flow_nic_set_error(ERR_SUCCESS, error);
+
+ const uint32_t nb_cells =
+ template_table->nb_pattern_templates * template_table->nb_actions_templates;
+
+ for (uint32_t i = 0; i < nb_cells; ++i) {
+ struct flow_template_table_cell *cell = &template_table->pattern_action_pairs[i];
+
+ if (cell->flm_db_idx_counter > 0) {
+ hw_db_inline_deref_idxs(dev->ndev, dev->ndev->hw_db_handle,
+ (struct hw_db_idx *)cell->flm_db_idxs,
+ cell->flm_db_idx_counter);
+ }
+ }
+
+ free(template_table->pattern_templates);
+ free(template_table->actions_templates);
+ free(template_table->pattern_action_pairs);
+ free(template_table);
+
+ return 0;
+}
+
struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev,
uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
@@ -5753,6 +5970,14 @@ static const struct profile_inline_ops ops = {
.flow_get_flm_stats_profile_inline = flow_get_flm_stats_profile_inline,
.flow_info_get_profile_inline = flow_info_get_profile_inline,
.flow_configure_profile_inline = flow_configure_profile_inline,
+ .flow_pattern_template_create_profile_inline = flow_pattern_template_create_profile_inline,
+ .flow_pattern_template_destroy_profile_inline =
+ flow_pattern_template_destroy_profile_inline,
+ .flow_actions_template_create_profile_inline = flow_actions_template_create_profile_inline,
+ .flow_actions_template_destroy_profile_inline =
+ flow_actions_template_destroy_profile_inline,
+ .flow_template_table_create_profile_inline = flow_template_table_create_profile_inline,
+ .flow_template_table_destroy_profile_inline = flow_template_table_destroy_profile_inline,
.flow_async_create_profile_inline = flow_async_create_profile_inline,
.flow_async_destroy_profile_inline = flow_async_destroy_profile_inline,
/*
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index b548142342..0dc89085ec 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -73,6 +73,34 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
* RTE flow asynchronous operations functions
*/
+struct flow_pattern_template *flow_pattern_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error);
+
+int flow_pattern_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error);
+
+struct flow_actions_template *flow_actions_template_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_action actions[], const struct rte_flow_action masks[],
+ struct rte_flow_error *error);
+
+int flow_actions_template_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error);
+
+struct flow_template_table *flow_template_table_create_profile_inline(struct flow_eth_dev *dev,
+ const struct rte_flow_template_table_attr *table_attr, uint16_t forced_vlan_vid,
+ uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error);
+
+int flow_template_table_destroy_profile_inline(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error);
+
struct flow_handle *flow_async_create_profile_inline(struct flow_eth_dev *dev, uint32_t queue_id,
const struct rte_flow_op_attr *op_attr,
struct flow_template_table *template_table, const struct rte_flow_item pattern[],
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index e8e7090661..eb764356eb 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -314,6 +314,36 @@ struct profile_inline_ops {
* RTE flow asynchronous operations functions
*/
+ struct flow_pattern_template *(*flow_pattern_template_create_profile_inline)
+ (struct flow_eth_dev *dev,
+ const struct rte_flow_pattern_template_attr *template_attr, uint16_t caller_id,
+ const struct rte_flow_item pattern[], struct rte_flow_error *error);
+
+ int (*flow_pattern_template_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_pattern_template *pattern_template,
+ struct rte_flow_error *error);
+
+ struct flow_actions_template *(*flow_actions_template_create_profile_inline)
+ (struct flow_eth_dev *dev,
+ const struct rte_flow_actions_template_attr *template_attr,
+ uint16_t caller_id, const struct rte_flow_action actions[],
+ const struct rte_flow_action masks[], struct rte_flow_error *error);
+
+ int (*flow_actions_template_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_actions_template *actions_template,
+ struct rte_flow_error *error);
+
+ struct flow_template_table *(*flow_template_table_create_profile_inline)
+ (struct flow_eth_dev *dev, const struct rte_flow_template_table_attr *table_attr,
+ uint16_t forced_vlan_vid, uint16_t caller_id,
+ struct flow_pattern_template *pattern_templates[], uint8_t nb_pattern_templates,
+ struct flow_actions_template *actions_templates[], uint8_t nb_actions_templates,
+ struct rte_flow_error *error);
+
+ int (*flow_template_table_destroy_profile_inline)(struct flow_eth_dev *dev,
+ struct flow_template_table *template_table,
+ struct rte_flow_error *error);
+
struct flow_handle *(*flow_async_create_profile_inline)(struct flow_eth_dev *dev,
uint32_t queue_id, const struct rte_flow_op_attr *op_attr,
struct flow_template_table *template_table, const struct rte_flow_item pattern[],
--
2.45.0
^ permalink raw reply [flat|nested] 405+ messages in thread
* [PATCH v5 80/80] net/ntnic: add MTU configuration
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
` (78 preceding siblings ...)
2024-10-30 21:39 ` [PATCH v5 79/80] net/ntnic: add async template implementation Serhii Iliushyk
@ 2024-10-30 21:39 ` Serhii Iliushyk
79 siblings, 0 replies; 405+ messages in thread
From: Serhii Iliushyk @ 2024-10-30 21:39 UTC (permalink / raw)
To: dev
Cc: mko-plv, sil-plv, ckm, andrew.rybchenko, ferruh.yigit, stephen,
Oleksandr Kolomeiets
From: Oleksandr Kolomeiets <okl-plv@napatech.com>
Add support for the rte_eth_dev_set_mtu API.
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
---
doc/guides/nics/features/default.ini | 2 +-
doc/guides/nics/features/ntnic.ini | 1 +
doc/guides/nics/ntnic.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 1 +
drivers/net/ntnic/include/flow_api_engine.h | 7 ++
drivers/net/ntnic/include/hw_mod_backend.h | 4 +
.../ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c | 96 +++++++++++++++++++
.../profile_inline/flow_api_profile_inline.c | 82 +++++++++++++++-
.../profile_inline/flow_api_profile_inline.h | 9 ++
.../flow_api_profile_inline_config.h | 50 ++++++++++
drivers/net/ntnic/ntnic_ethdev.c | 41 ++++++++
drivers/net/ntnic/ntnic_mod_reg.h | 5 +
12 files changed, 296 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 1e9a156a2a..a0c392f463 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -180,7 +180,7 @@ nvgre_decap =
nvgre_encap =
of_copy_ttl_in =
of_copy_ttl_out =
-of_dec_mpls_ttl =
-of_dec_mpls_ttl =
of_dec_nw_ttl =
of_pop_mpls =
of_pop_vlan =
diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 884365f1a0..1bf9bd76db 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -14,6 +14,7 @@ RSS hash = Y
RSS key update = Y
Basic stats = Y
Extended stats = Y
+MTU update = Y
Linux = Y
x86-64 = Y
diff --git a/doc/guides/nics/ntnic.rst b/doc/guides/nics/ntnic.rst
index fa6cd2b95c..e12553e415 100644
--- a/doc/guides/nics/ntnic.rst
+++ b/doc/guides/nics/ntnic.rst
@@ -71,6 +71,7 @@ Features
- Flow metering, including meter policy API.
- Flow update. Update of the action list for specific flow
- Asynchronous flow support
+- MTU update
Limitations
~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 13f7dada4b..517085f0b3 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -168,6 +168,7 @@ New Features
* Added meter flow metering and flow policy support
* Added flow actions update support
* Added asynchronous flow support
+ * Added MTU update
* **Added cryptodev queue pair reset support.**
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index 8604dde995..5eace2614f 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -280,6 +280,11 @@ struct nic_flow_def {
* AGE action timeout
*/
struct age_def_s age;
+
+ /*
+ * TX fragmentation IFR/RPP_LR MTU recipe
+ */
+ uint8_t flm_mtu_fragmentation_recipe;
};
enum flow_handle_type {
@@ -340,6 +345,8 @@ struct flow_handle {
uint8_t flm_qfi;
uint8_t flm_scrub_prof;
+ uint8_t flm_mtu_fragmentation_recipe;
+
/* Flow specific pointer to application template table cell stored during
* flow create.
*/
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 7a36e4c6d6..f91a3ed058 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -958,8 +958,12 @@ int hw_mod_tpe_rpp_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, i
uint32_t value);
int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_rpp_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
+int hw_mod_tpe_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value);
int hw_mod_tpe_ins_rcp_flush(struct flow_api_backend_s *be, int start_idx, int count);
int hw_mod_tpe_ins_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
diff --git a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
index ba8f2d0dbb..2c3ed2355b 100644
--- a/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
+++ b/drivers/net/ntnic/nthw/flow_api/hw_mod/hw_mod_tpe.c
@@ -152,6 +152,54 @@ int hw_mod_tpe_rpp_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, i
return be->iface->tpe_rpp_ifr_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_rpp_ifr_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_ifr_categories)
+ return INDEX_TOO_LARGE;
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_IFR_RCP_IPV4_EN:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv4_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV4_DF_DROP:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv4_df_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_EN:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv6_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_DROP:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].ipv6_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_MTU:
+ GET_SET(be->tpe.v3.rpp_ifr_rcp[index].mtu, value);
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_rpp_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_rpp_ifr_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* RPP_RCP
*/
@@ -262,6 +310,54 @@ int hw_mod_tpe_ifr_rcp_flush(struct flow_api_backend_s *be, int start_idx, int c
return be->iface->tpe_ifr_rcp_flush(be->be_dev, &be->tpe, start_idx, count);
}
+static int hw_mod_tpe_ifr_rcp_mod(struct flow_api_backend_s *be, enum hw_tpe_e field,
+ uint32_t index, uint32_t *value, int get)
+{
+ if (index >= be->tpe.nb_ifr_categories)
+ return INDEX_TOO_LARGE;
+
+ switch (_VER_) {
+ case 3:
+ switch (field) {
+ case HW_TPE_IFR_RCP_IPV4_EN:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv4_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV4_DF_DROP:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv4_df_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_EN:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv6_en, value);
+ break;
+
+ case HW_TPE_IFR_RCP_IPV6_DROP:
+ GET_SET(be->tpe.v3.ifr_rcp[index].ipv6_drop, value);
+ break;
+
+ case HW_TPE_IFR_RCP_MTU:
+ GET_SET(be->tpe.v3.ifr_rcp[index].mtu, value);
+ break;
+
+ default:
+ return UNSUP_FIELD;
+ }
+
+ break;
+
+ default:
+ return UNSUP_VER;
+ }
+
+ return 0;
+}
+
+int hw_mod_tpe_ifr_rcp_set(struct flow_api_backend_s *be, enum hw_tpe_e field, int index,
+ uint32_t value)
+{
+ return hw_mod_tpe_ifr_rcp_mod(be, field, index, &value, 0);
+}
+
/*
* INS_RCP
*/
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 9fd943365f..a34839e00c 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -803,6 +803,11 @@ static inline void set_key_def_sw(struct flm_flow_key_def_s *key_def, unsigned i
}
}
+static inline uint8_t convert_port_to_ifr_mtu_recipe(uint32_t port)
+{
+ return port + 1;
+}
+
static uint8_t get_port_from_port_id(const struct flow_nic_dev *ndev, uint32_t port_id)
{
struct flow_eth_dev *dev = ndev->eth_base;
@@ -1023,6 +1028,8 @@ static int flm_flow_programming(struct flow_handle *fh, uint32_t flm_op)
learn_record->rqi = fh->flm_rqi;
/* Lower 10 bits used for RPL EXT PTR */
learn_record->color = fh->flm_rpl_ext_ptr & 0x3ff;
+ /* Bit [13:10] used for MTU recipe */
+ learn_record->color |= (fh->flm_mtu_fragmentation_recipe & 0xf) << 10;
learn_record->ent = 0;
learn_record->op = flm_op & 0xf;
@@ -1121,6 +1128,9 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,
fd->dst_id[fd->dst_num_avail].active = 1;
fd->dst_num_avail++;
+ fd->flm_mtu_fragmentation_recipe =
+ convert_port_to_ifr_mtu_recipe(port);
+
if (fd->full_offload < 0)
fd->full_offload = 1;
@@ -3070,6 +3080,8 @@ static void copy_fd_to_fh_flm(struct flow_handle *fh, const struct nic_flow_def
break;
}
}
+
+ fh->flm_mtu_fragmentation_recipe = fd->flm_mtu_fragmentation_recipe;
fh->context = fd->age.context;
}
@@ -3187,7 +3199,7 @@ static int setup_flow_flm_actions(struct flow_eth_dev *dev,
/* Setup COT */
struct hw_db_inline_cot_data cot_data = {
.matcher_color_contrib = empty_pattern ? 0x0 : 0x4, /* FT key C */
- .frag_rcp = 0,
+ .frag_rcp = empty_pattern ? fd->flm_mtu_fragmentation_recipe : 0,
};
struct hw_db_cot_idx cot_idx =
hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle, &cot_data);
@@ -3501,7 +3513,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
/* Setup COT */
struct hw_db_inline_cot_data cot_data = {
.matcher_color_contrib = 0,
- .frag_rcp = 0,
+ .frag_rcp = fd->flm_mtu_fragmentation_recipe,
};
struct hw_db_cot_idx cot_idx =
hw_db_inline_cot_add(dev->ndev, dev->ndev->hw_db_handle,
@@ -5412,6 +5424,67 @@ int flow_get_flm_stats_profile_inline(struct flow_nic_dev *ndev, uint64_t *data,
return 0;
}
+int flow_set_mtu_inline(struct flow_eth_dev *dev, uint32_t port, uint16_t mtu)
+{
+ if (port >= 255)
+ return -1;
+
+ uint32_t ipv4_en_frag;
+ uint32_t ipv4_action;
+ uint32_t ipv6_en_frag;
+ uint32_t ipv6_action;
+
+ if (port == 0) {
+ ipv4_en_frag = PORT_0_IPV4_FRAGMENTATION;
+ ipv4_action = PORT_0_IPV4_DF_ACTION;
+ ipv6_en_frag = PORT_0_IPV6_FRAGMENTATION;
+ ipv6_action = PORT_0_IPV6_ACTION;
+
+ } else if (port == 1) {
+ ipv4_en_frag = PORT_1_IPV4_FRAGMENTATION;
+ ipv4_action = PORT_1_IPV4_DF_ACTION;
+ ipv6_en_frag = PORT_1_IPV6_FRAGMENTATION;
+ ipv6_action = PORT_1_IPV6_ACTION;
+
+ } else {
+ ipv4_en_frag = DISABLE_FRAGMENTATION;
+ ipv4_action = IPV4_DF_DROP;
+ ipv6_en_frag = DISABLE_FRAGMENTATION;
+ ipv6_action = IPV6_DROP;
+ }
+
+ int err = 0;
+ uint8_t ifr_mtu_recipe = convert_port_to_ifr_mtu_recipe(port);
+ struct flow_nic_dev *ndev = dev->ndev;
+
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_EN, ifr_mtu_recipe,
+ ipv4_en_frag);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_EN, ifr_mtu_recipe,
+ ipv6_en_frag);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_MTU, ifr_mtu_recipe, mtu);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_DF_DROP, ifr_mtu_recipe,
+ ipv4_action);
+ err |= hw_mod_tpe_rpp_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_DROP, ifr_mtu_recipe,
+ ipv6_action);
+
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_EN, ifr_mtu_recipe,
+ ipv4_en_frag);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_EN, ifr_mtu_recipe,
+ ipv6_en_frag);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_MTU, ifr_mtu_recipe, mtu);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV4_DF_DROP, ifr_mtu_recipe,
+ ipv4_action);
+ err |= hw_mod_tpe_ifr_rcp_set(&ndev->be, HW_TPE_IFR_RCP_IPV6_DROP, ifr_mtu_recipe,
+ ipv6_action);
+
+ if (err == 0) {
+ err |= hw_mod_tpe_rpp_ifr_rcp_flush(&ndev->be, ifr_mtu_recipe, 1);
+ err |= hw_mod_tpe_ifr_rcp_flush(&ndev->be, ifr_mtu_recipe, 1);
+ }
+
+ return err;
+}
+
int flow_info_get_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
struct rte_flow_port_info *port_info,
struct rte_flow_queue_info *queue_info, struct rte_flow_error *error)
@@ -5996,6 +6069,11 @@ static const struct profile_inline_ops ops = {
.flm_free_queues = flm_free_queues,
.flm_mtr_read_stats = flm_mtr_read_stats,
.flm_update = flm_update,
+
+ /*
+ * Config API
+ */
+ .flow_set_mtu_inline = flow_set_mtu_inline,
};
void profile_inline_init(void)
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
index 0dc89085ec..ce1a0669ee 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.h
@@ -11,6 +11,10 @@
#include "flow_api.h"
#include "stream_binary_flow_api.h"
+#define DISABLE_FRAGMENTATION 0
+#define IPV4_DF_DROP 1
+#define IPV6_DROP 1
+
/*
* Management
*/
@@ -120,4 +124,9 @@ int flow_configure_profile_inline(struct flow_eth_dev *dev, uint8_t caller_id,
const struct rte_flow_queue_attr *queue_attr[],
struct rte_flow_error *error);
+/*
+ * Config API
+ */
+int flow_set_mtu_inline(struct flow_eth_dev *dev, uint32_t port, uint16_t mtu);
+
#endif /* _FLOW_API_PROFILE_INLINE_H_ */
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
index 3b53288ddf..c665cab16a 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline_config.h
@@ -6,6 +6,56 @@
#ifndef _FLOW_API_PROFILE_INLINE_CONFIG_H_
#define _FLOW_API_PROFILE_INLINE_CONFIG_H_
+/*
+ * Per port configuration for IPv4 fragmentation and DF flag handling
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV4_FRAGMENTATION | IPV4_DF_ACTION || Exceeding MTU | DF flag || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - | - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_DROP || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DF_FORWARD || no | - || Forward ||
+ * || | || yes | 0 || Fragment ||
+ * || | || yes | 1 || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV4_DF_ACTION IPV4_DF_DROP
+
+#define PORT_1_IPV4_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV4_DF_ACTION IPV4_DF_DROP
+
+/*
+ * Per port configuration for IPv6 fragmentation
+ *
+ * ||-------------------------------------||-------------------------||----------||
+ * || Configuration || Egress packet type || ||
+ * ||-------------------------------------||-------------------------|| Action ||
+ * || IPV6_FRAGMENTATION | IPV6_ACTION || Exceeding MTU || ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || DISABLE | - || - || Forward ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | DROP || no || Forward ||
+ * || | || yes || Drop ||
+ * ||-------------------------------------||-------------------------||----------||
+ * || ENABLE | FRAGMENT || no || Forward ||
+ * || | || yes || Fragment ||
+ * ||-------------------------------------||-------------------------||----------||
+ */
+
+#define PORT_0_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_0_IPV6_ACTION IPV6_DROP
+
+#define PORT_1_IPV6_FRAGMENTATION DISABLE_FRAGMENTATION
+#define PORT_1_IPV6_ACTION IPV6_DROP
+
/*
* Statistics are generated each time the byte counter crosses a limit.
* If BYTE_LIMIT is zero then the byte counter does not trigger statistics
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 77436eb02d..2a2643a106 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -39,6 +39,7 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define THREAD_RETURN (0)
#define HW_MAX_PKT_LEN (10000)
#define MAX_MTU (HW_MAX_PKT_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)
+#define MIN_MTU_INLINE 512
#define EXCEPTION_PATH_HID 0
@@ -70,6 +71,8 @@ const rte_thread_attr_t thread_attr = { .priority = RTE_THREAD_PRIORITY_NORMAL }
#define MAX_RX_PACKETS 128
#define MAX_TX_PACKETS 128
+#define MTUINITVAL 1500
+
uint64_t rte_tsc_freq;
static void (*previous_handler)(int sig);
@@ -338,6 +341,7 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
dev_info->max_mtu = MAX_MTU;
if (p_adapter_info->fpga_info.profile == FPGA_INFO_PROFILE_INLINE) {
+ dev_info->min_mtu = MIN_MTU_INLINE;
dev_info->flow_type_rss_offloads = NT_ETH_RSS_OFFLOAD_MASK;
dev_info->hash_key_size = MAX_RSS_KEY_LEN;
@@ -1149,6 +1153,26 @@ static int eth_tx_scg_queue_setup(struct rte_eth_dev *eth_dev,
return 0;
}
+static int dev_set_mtu_inline(struct rte_eth_dev *eth_dev, uint16_t mtu)
+{
+ const struct profile_inline_ops *profile_inline_ops = get_profile_inline_ops();
+
+ if (profile_inline_ops == NULL) {
+ NT_LOG_DBGX(ERR, NTNIC, "profile_inline module uninitialized");
+ return -1;
+ }
+
+ struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+
+ struct flow_eth_dev *flw_dev = internals->flw_dev;
+ int ret = -1;
+
+ if (internals->type == PORT_TYPE_PHYSICAL && mtu >= MIN_MTU_INLINE && mtu <= MAX_MTU)
+ ret = profile_inline_ops->flow_set_mtu_inline(flw_dev, internals->port, mtu);
+
+ return ret ? -EINVAL : 0;
+}
+
static int eth_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
eth_dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -1714,6 +1738,7 @@ static struct eth_dev_ops nthw_eth_dev_ops = {
.xstats_reset = eth_xstats_reset,
.xstats_get_by_id = eth_xstats_get_by_id,
.xstats_get_names_by_id = eth_xstats_get_names_by_id,
+ .mtu_set = NULL,
.promiscuous_enable = promiscuous_enable,
.rss_hash_update = eth_dev_rss_hash_update,
.rss_hash_conf_get = rss_hash_conf_get,
@@ -2277,6 +2302,7 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
internals->pci_dev = pci_dev;
internals->n_intf_no = n_intf_no;
internals->type = PORT_TYPE_PHYSICAL;
+ internals->port = n_intf_no;
internals->nb_rx_queues = nb_rx_queues;
internals->nb_tx_queues = nb_tx_queues;
@@ -2386,6 +2412,21 @@ nthw_pci_dev_init(struct rte_pci_device *pci_dev)
/* increase initialized ethernet devices - PF */
p_drv->n_eth_dev_init_count++;
+ if (get_flow_filter_ops() != NULL) {
+ if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE &&
+ internals->flw_dev->ndev->be.tpe.ver >= 2) {
+ assert(nthw_eth_dev_ops.mtu_set == dev_set_mtu_inline ||
+ nthw_eth_dev_ops.mtu_set == NULL);
+ nthw_eth_dev_ops.mtu_set = dev_set_mtu_inline;
+ dev_set_mtu_inline(eth_dev, MTUINITVAL);
+ NT_LOG_DBGX(DBG, NTNIC, "INLINE MTU supported, tpe version %d",
+ internals->flw_dev->ndev->be.tpe.ver);
+
+ } else {
+ NT_LOG(DBG, NTNIC, "INLINE MTU not supported");
+ }
+ }
+
/* Port event thread */
if (fpga_info->profile == FPGA_INFO_PROFILE_INLINE) {
res = THREAD_CTRL_CREATE(&p_nt_drv->port_event_thread, "nt_port_event_thr",
diff --git a/drivers/net/ntnic/ntnic_mod_reg.h b/drivers/net/ntnic/ntnic_mod_reg.h
index eb764356eb..71861c6dea 100644
--- a/drivers/net/ntnic/ntnic_mod_reg.h
+++ b/drivers/net/ntnic/ntnic_mod_reg.h
@@ -408,6 +408,11 @@ struct profile_inline_ops {
const struct rte_flow_port_attr *port_attr, uint16_t nb_queue,
const struct rte_flow_queue_attr *queue_attr[],
struct rte_flow_error *error);
+
+ /*
+ * Config API
+ */
+ int (*flow_set_mtu_inline)(struct flow_eth_dev *dev, uint32_t port, uint16_t mtu);
};
void register_profile_inline_ops(const struct profile_inline_ops *ops);
--
2.45.0
2024-10-21 21:04 [PATCH v1 00/73] Provide flow filter API and statistics Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 01/73] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 02/73] net/ntnic: add flow filter API Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 03/73] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 04/73] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 05/73] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 06/73] net/ntnic: add management API for NT flow profile Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 07/73] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 08/73] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 09/73] net/ntnic: add infrastructure for for flow actions and items Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 10/73] net/ntnic: add action queue Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 11/73] net/ntnic: add action mark Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 12/73] net/ntnic: add ation jump Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 13/73] net/ntnic: add action drop Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 14/73] net/ntnic: add item eth Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 15/73] net/ntnic: add item IPv4 Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 16/73] net/ntnic: add item ICMP Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 17/73] net/ntnic: add item port ID Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 18/73] net/ntnic: add item void Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 19/73] net/ntnic: add item UDP Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 20/73] net/ntnic: add action TCP Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 21/73] net/ntnic: add action VLAN Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 22/73] net/ntnic: add item SCTP Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 23/73] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 24/73] net/ntnic: add action modify filed Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 25/73] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 26/73] net/ntnic: add cat module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 27/73] net/ntnic: add SLC LR module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 28/73] net/ntnic: add PDB module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 29/73] net/ntnic: add QSL module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 30/73] net/ntnic: add KM module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 31/73] net/ntnic: add hash API Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 32/73] net/ntnic: add TPE module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 33/73] net/ntnic: add FLM module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 34/73] net/ntnic: add flm rcp module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 35/73] net/ntnic: add learn flow queue handling Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 36/73] net/ntnic: match and action db attributes were added Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 37/73] net/ntnic: add flow dump feature Serhii Iliushyk
2024-10-21 23:10 ` Stephen Hemminger
2024-10-21 21:04 ` [PATCH v1 38/73] net/ntnic: add flow flush Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 39/73] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 40/73] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 41/73] net/ntnic: add MOD CSU Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 42/73] net/ntnic: add MOD FLM Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 43/73] net/ntnic: add HFU module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 44/73] net/ntnic: add IFR module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 45/73] net/ntnic: add MAC Rx module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 46/73] net/ntnic: add MAC Tx module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 47/73] net/ntnic: add RPP LR module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 48/73] net/ntnic: add MOD SLC LR Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 49/73] net/ntnic: add Tx CPY module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 50/73] net/ntnic: add Tx INS module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 51/73] net/ntnic: add Tx RPL module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 52/73] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
2024-10-21 23:12 ` Stephen Hemminger
2024-10-21 21:04 ` [PATCH v1 53/73] net/ntnic: enable RSS feature Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 54/73] net/ntnic: add statistics API Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 55/73] net/ntnic: add rpf module Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 56/73] net/ntnic: add statistics poll Serhii Iliushyk
2024-10-21 21:04 ` [PATCH v1 57/73] net/ntnic: added flm stat interface Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 58/73] net/ntnic: add tsm module Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 59/73] net/ntnic: add STA module Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 60/73] net/ntnic: add TSM module Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 61/73] net/ntnic: add xstats Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 62/73] net/ntnic: added flow statistics Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 63/73] net/ntnic: add scrub registers Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 64/73] net/ntnic: update documentation Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 65/73] net/ntnic: added flow aged APIs Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 66/73] net/ntnic: add aged API to the inline profile Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 67/73] net/ntnic: add info and configure flow API Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 68/73] net/ntnic: add aged flow event Serhii Iliushyk
2024-10-21 23:22 ` Stephen Hemminger
2024-10-21 21:05 ` [PATCH v1 69/73] net/ntnic: add thread termination Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 70/73] net/ntnic: add age documentation Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 71/73] net/ntnic: add meter API Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 72/73] net/ntnic: add meter module Serhii Iliushyk
2024-10-21 21:05 ` [PATCH v1 73/73] net/ntnic: add meter documentation Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 00/86] Provide flow filter API and statistics Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 01/86] net/ntnic: add API for configuration NT flow dev Serhii Iliushyk
2024-10-30 1:54 ` Ferruh Yigit
2024-10-29 16:41 ` [PATCH v4 02/86] net/ntnic: add flow filter API Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 03/86] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 04/86] net/ntnic: add internal flow create/destroy API Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 05/86] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
2024-10-30 1:56 ` Ferruh Yigit
2024-10-30 21:08 ` Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 06/86] net/ntnic: add management API for NT flow profile Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 07/86] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 08/86] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 09/86] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 10/86] net/ntnic: add action queue Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 11/86] net/ntnic: add action mark Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 12/86] net/ntnic: add action jump Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 13/86] net/ntnic: add action drop Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 14/86] net/ntnic: add item eth Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 15/86] net/ntnic: add item IPv4 Serhii Iliushyk
2024-10-30 1:55 ` Ferruh Yigit
2024-10-29 16:41 ` [PATCH v4 16/86] net/ntnic: add item ICMP Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 17/86] net/ntnic: add item port ID Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 18/86] net/ntnic: add item void Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 19/86] net/ntnic: add item UDP Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 20/86] net/ntnic: add action TCP Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 21/86] net/ntnic: add action VLAN Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 22/86] net/ntnic: add item SCTP Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 23/86] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 24/86] net/ntnic: add action modify field Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 25/86] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 26/86] net/ntnic: add cat module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 27/86] net/ntnic: add SLC LR module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 28/86] net/ntnic: add PDB module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 29/86] net/ntnic: add QSL module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 30/86] net/ntnic: add KM module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 31/86] net/ntnic: add hash API Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 32/86] net/ntnic: add TPE module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 33/86] net/ntnic: add FLM module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 34/86] net/ntnic: add flm rcp module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 35/86] net/ntnic: add learn flow queue handling Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 36/86] net/ntnic: match and action db attributes were added Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 37/86] net/ntnic: add flow dump feature Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 38/86] net/ntnic: add flow flush Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 39/86] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 40/86] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 41/86] net/ntnic: add CSU module registers Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 42/86] net/ntnic: add FLM " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 43/86] net/ntnic: add HFU " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 44/86] net/ntnic: add IFR " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 45/86] net/ntnic: add MAC Rx " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 46/86] net/ntnic: add MAC Tx " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 47/86] net/ntnic: add RPP LR " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 48/86] net/ntnic: add SLC " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 49/86] net/ntnic: add Tx CPY " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 50/86] net/ntnic: add Tx INS " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 51/86] net/ntnic: add Tx RPL " Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 52/86] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 53/86] net/ntnic: enable RSS feature Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 54/86] net/ntnic: add statistics API Serhii Iliushyk
2024-10-29 16:41 ` [PATCH v4 55/86] net/ntnic: add rpf module Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 56/86] net/ntnic: add statistics poll Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 57/86] net/ntnic: added flm stat interface Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 58/86] net/ntnic: add tsm module Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 59/86] net/ntnic: add STA module Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 60/86] net/ntnic: add TSM module Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 61/86] net/ntnic: add xstats Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 62/86] net/ntnic: added flow statistics Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 63/86] net/ntnic: add scrub registers Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 64/86] net/ntnic: update documentation Serhii Iliushyk
2024-10-30 1:55 ` Ferruh Yigit
2024-10-29 16:42 ` [PATCH v4 65/86] net/ntnic: add flow aging API Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 66/86] net/ntnic: add aging API to the inline profile Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 67/86] net/ntnic: add flow info and flow configure APIs Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 68/86] net/ntnic: add flow aging event Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 69/86] net/ntnic: add termination thread Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 70/86] net/ntnic: add aging documentation Serhii Iliushyk
2024-10-30 1:56 ` Ferruh Yigit
2024-10-29 16:42 ` [PATCH v4 71/86] net/ntnic: add meter API Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 72/86] net/ntnic: add meter module Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 73/86] net/ntnic: update meter documentation Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 74/86] net/ntnic: add action update Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 75/86] net/ntnic: add flow " Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 76/86] net/ntnic: flow update was added Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 77/86] net/ntnic: update documentation for flow actions update Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 78/86] net/ntnic: migrate to the RTE spinlock Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 79/86] net/ntnic: remove unnecessary type cast Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 80/86] net/ntnic: add async create/destroy API declaration Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 81/86] net/ntnic: add async template " Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 82/86] net/ntnic: add async flow create/delete API implementation Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 83/86] net/ntnic: add async template APIs implementation Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 84/86] net/ntnic: update async flow API documentation Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 85/86] net/ntnic: add MTU configuration Serhii Iliushyk
2024-10-29 16:42 ` [PATCH v4 86/86] net/ntnic: update documentation for set MTU Serhii Iliushyk
2024-10-30 2:01 ` [PATCH v4 00/86] Provide flow filter API and statistics Ferruh Yigit
2024-10-30 21:38 ` [PATCH v5 00/80] Provide flow filter and statistics support Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 01/80] net/ntnic: add NT flow dev configuration Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 02/80] net/ntnic: add flow filter support Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 03/80] net/ntnic: add minimal create/destroy flow operations Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 04/80] net/ntnic: add internal functions for create/destroy Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 05/80] net/ntnic: add minimal NT flow inline profile Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 06/80] net/ntnic: add management functions for NT flow profile Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 07/80] net/ntnic: add NT flow profile management implementation Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 08/80] net/ntnic: add create/destroy implementation for NT flows Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 09/80] net/ntnic: add infrastructure for flow actions and items Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 10/80] net/ntnic: add action queue Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 11/80] net/ntnic: add action mark Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 12/80] net/ntnic: add action jump Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 13/80] net/ntnic: add action drop Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 14/80] net/ntnic: add item eth Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 15/80] net/ntnic: add item IPv4 Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 16/80] net/ntnic: add item ICMP Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 17/80] net/ntnic: add item port ID Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 18/80] net/ntnic: add item void Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 19/80] net/ntnic: add item UDP Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 20/80] net/ntnic: add action TCP Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 21/80] net/ntnic: add action VLAN Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 22/80] net/ntnic: add item SCTP Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 23/80] net/ntnic: add items IPv6 and ICMPv6 Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 24/80] net/ntnic: add action modify field Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 25/80] net/ntnic: add items gtp and actions raw encap/decap Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 26/80] net/ntnic: add cat module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 27/80] net/ntnic: add SLC LR module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 28/80] net/ntnic: add PDB module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 29/80] net/ntnic: add QSL module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 30/80] net/ntnic: add KM module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 31/80] net/ntnic: add hash API Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 32/80] net/ntnic: add TPE module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 33/80] net/ntnic: add FLM module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 34/80] net/ntnic: add FLM RCP module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 35/80] net/ntnic: add learn flow queue handling Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 36/80] net/ntnic: match and action db attributes were added Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 37/80] net/ntnic: add flow dump feature Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 38/80] net/ntnic: add flow flush Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 39/80] net/ntnic: add GMF (Generic MAC Feeder) module Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 40/80] net/ntnic: sort FPGA registers alphanumerically Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 41/80] net/ntnic: add CSU module registers Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 42/80] net/ntnic: add FLM " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 43/80] net/ntnic: add HFU " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 44/80] net/ntnic: add IFR " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 45/80] net/ntnic: add MAC Rx " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 46/80] net/ntnic: add MAC Tx " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 47/80] net/ntnic: add RPP LR " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 48/80] net/ntnic: add SLC " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 49/80] net/ntnic: add Tx CPY " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 50/80] net/ntnic: add Tx INS " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 51/80] net/ntnic: add Tx RPL " Serhii Iliushyk
2024-10-30 21:38 ` [PATCH v5 52/80] net/ntnic: update alignment for virt queue structs Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 53/80] net/ntnic: enable RSS feature Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 54/80] net/ntnic: add statistics support Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 55/80] net/ntnic: add rpf module Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 56/80] net/ntnic: add statistics poll Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 57/80] net/ntnic: added flm stat interface Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 58/80] net/ntnic: add TSM module Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 59/80] net/ntnic: add STA module Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 60/80] net/ntnic: add TSM module Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 61/80] net/ntnic: add xStats Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 62/80] net/ntnic: added flow statistics Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 63/80] net/ntnic: add scrub registers Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 64/80] net/ntnic: add high-level flow aging support Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 65/80] net/ntnic: add aging to the inline profile Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 66/80] net/ntnic: add flow info and flow configure support Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 67/80] net/ntnic: add flow aging event Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 68/80] net/ntnic: add termination thread Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 69/80] net/ntnic: add meter support Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 70/80] net/ntnic: add meter module Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 71/80] net/ntnic: add action update support Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 72/80] net/ntnic: add flow action update Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 73/80] net/ntnic: add flow actions update Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 74/80] net/ntnic: migrate to the RTE spinlock Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 75/80] net/ntnic: remove unnecessary type cast Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 76/80] net/ntnic: add async create/destroy declaration Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 77/80] net/ntnic: add async template declaration Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 78/80] net/ntnic: add async flow create/delete implementation Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 79/80] net/ntnic: add async template implementation Serhii Iliushyk
2024-10-30 21:39 ` [PATCH v5 80/80] net/ntnic: add MTU configuration Serhii Iliushyk