From: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
To: dev@dpdk.org
Cc: Kishore Padmanabha <kishore.padmanabha@broadcom.com>,
Shuanglin Wang <shuanglin.wang@broadcom.com>,
Michael Baucom <michael.baucom@broadcom.com>,
Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Subject: [PATCH 45/47] net/bnxt: tf_ulp: support a few feature extensions
Date: Fri, 30 Aug 2024 19:30:47 +0530
Message-ID: <20240830140049.1715230-46-sriharsha.basavapatna@broadcom.com>
In-Reply-To: <20240830140049.1715230-1-sriharsha.basavapatna@broadcom.com>
From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
This patch adds support for the following features:
add support for port table write operation
Add support for a port table write operation from the
template, so that a template can write mirror id details
into the port database.
support generic template for socket direct
Support the socket direct feature, which is disabled by
default. Users can enable it with the truflow feature bit
meson configuration parameter.
add support for truflow promiscuous mode
The truflow application supports promiscuous mode to enable
or disable receiving packets with unknown destination
MAC addresses.
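The promiscuous rule handling in this patch is idempotent: bnxt_ulp_promisc_mode_set() creates the rule only when enabling and none is installed, and destroys it only when disabling an installed one (a nonzero flow id marks an installed rule). A minimal self-contained sketch of that toggle logic, using hypothetical stand-in helpers rather than the driver's actual create/destroy paths:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the driver's rule create/destroy helpers.
 * A nonzero flow id marks an installed promiscuous rule. */
static uint32_t next_flow_id = 1;

static uint32_t create_promisc_rule(void)
{
	return next_flow_id++;
}

static void destroy_promisc_rule(uint32_t flow_id)
{
	(void)flow_id; /* release of hardware resources elided */
}

/* Sketch of the idempotent enable/disable logic: create the rule only
 * when enabling and none exists; destroy it only when disabling and
 * one is installed. Repeated calls with the same state are no-ops. */
static void promisc_mode_set(uint32_t *promisc_flow_id, uint8_t enable)
{
	if (enable && *promisc_flow_id == 0) {
		*promisc_flow_id = create_promisc_rule();
	} else if (!enable && *promisc_flow_id != 0) {
		destroy_promisc_rule(*promisc_flow_id);
		*promisc_flow_id = 0;
	}
}
```

In the patch itself, the same state check (`enable && !info->promisc_flow_id` / `!enable && info->promisc_flow_id`) gates rule creation and destruction per port.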
set metadata for profile tcam entry
Currently only the higher bits of the metadata are used
for the profile tcam entry. To make better use of EM
entries, use the metadata fully instead of only its
higher bits.
support the group miss action
The generic template supports setting a group miss
action with the following rte flow command:
flow group 0 group_id 1 ingress set_miss_actions jump
group 3 / end
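One check performed when the miss action is processed (see bnxt_ulp_grp_miss_act_set() in the patch): a jump action in a group's miss actions must not target the group itself. A self-contained sketch of that validation, with a hypothetical helper name:

```c
#include <stdint.h>

/* Sketch of the self-jump validation in bnxt_ulp_grp_miss_act_set():
 * a group's miss action may jump to another group, but never back to
 * the same group, which would loop. Returns 0 on success, -1 when the
 * jump target equals the current group. */
static int validate_miss_jump(uint32_t current_group, uint32_t jump_group)
{
	if (jump_group == current_group)
		return -1; /* "Jump action cannot jump to its own group" */
	return 0;
}
```

In the driver the jump target is read from the parsed action properties (byte-order converted) before being compared against `attr->group`.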
fix some build failures
This change resolves build issues seen with some OSes and
compiler versions.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Shuanglin Wang <shuanglin.wang@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
---
drivers/net/bnxt/tf_ulp/bnxt_ulp.h | 31 +++
drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 27 ++-
drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c | 4 +
drivers/net/bnxt/tf_ulp/ulp_def_rules.c | 286 ++++++++++++++++++++++-
drivers/net/bnxt/tf_ulp/ulp_mapper.c | 43 +++-
drivers/net/bnxt/tf_ulp/ulp_port_db.c | 89 +++++++
drivers/net/bnxt/tf_ulp/ulp_port_db.h | 28 +++
drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 17 ++
8 files changed, 520 insertions(+), 5 deletions(-)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 758b9deb63..a35f79f167 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -92,9 +92,19 @@ enum bnxt_rte_flow_action_type {
BNXT_RTE_FLOW_ACTION_TYPE_LAST
};
+#define BNXT_ULP_MAX_GROUP_CNT 8
+struct bnxt_ulp_grp_rule_info {
+ uint32_t group_id;
+ uint32_t flow_id;
+ uint8_t dir;
+ uint8_t valid;
+};
+
struct bnxt_ulp_df_rule_info {
uint32_t def_port_flow_id;
+ uint32_t promisc_flow_id;
uint8_t valid;
+ struct bnxt_ulp_grp_rule_info grp_df_rule[BNXT_ULP_MAX_GROUP_CNT];
};
struct bnxt_ulp_vfr_rule_info {
@@ -291,4 +301,25 @@ bnxt_ulp_cntxt_entry_acquire(void *arg);
void
bnxt_ulp_cntxt_entry_release(void);
+int32_t
+bnxt_ulp_promisc_mode_set(struct bnxt *bp, uint8_t enable);
+
+int32_t
+bnxt_ulp_set_prio_attribute(struct ulp_rte_parser_params *params,
+ const struct rte_flow_attr *attr);
+
+void
+bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
+ const struct rte_flow_attr *attr);
+
+void
+bnxt_ulp_init_parser_cf_defaults(struct ulp_rte_parser_params *params,
+ uint16_t port_id);
+
+int32_t
+bnxt_ulp_grp_miss_act_set(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ uint32_t *flow_id);
+
#endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index eea05e129a..334eda99ce 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -66,7 +66,7 @@ bnxt_ulp_flow_validate_args(const struct rte_flow_attr *attr,
return BNXT_TF_RC_SUCCESS;
}
-static inline void
+void
bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
const struct rte_flow_attr *attr)
{
@@ -86,7 +86,7 @@ bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
}
}
-static int32_t
+int32_t
bnxt_ulp_set_prio_attribute(struct ulp_rte_parser_params *params,
const struct rte_flow_attr *attr)
{
@@ -117,7 +117,7 @@ bnxt_ulp_set_prio_attribute(struct ulp_rte_parser_params *params,
return 0;
}
-static inline void
+void
bnxt_ulp_init_parser_cf_defaults(struct ulp_rte_parser_params *params,
uint16_t port_id)
{
@@ -268,6 +268,26 @@ bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_parms *mparms,
ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_SOCKET_DIRECT_VPORT,
(vport == 1) ? 2 : 1);
}
+
+ /* Update the socket direct svif when socket_direct feature enabled. */
+ if (ULP_BITMAP_ISSET(bnxt_ulp_feature_bits_get(params->ulp_ctx),
+ BNXT_ULP_FEATURE_BIT_SOCKET_DIRECT)) {
+ enum bnxt_ulp_intf_type intf_type;
+ /* For ingress flow on trusted_vf port */
+ intf_type = bnxt_pmd_get_interface_type(params->port_id);
+ if (intf_type == BNXT_ULP_INTF_TYPE_TRUSTED_VF) {
+ uint16_t svif;
+ /* Get the socket direct svif of the given dev port */
+ if (unlikely(ulp_port_db_dev_port_socket_direct_svif_get(params->ulp_ctx,
+ params->port_id,
+ &svif))) {
+ BNXT_DRV_DBG(ERR, "Invalid port id %u\n",
+ params->port_id);
+ return;
+ }
+ ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_SOCKET_DIRECT_SVIF, svif);
+ }
+ }
}
/* Function to create the rte flow. */
@@ -305,6 +325,7 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
/* Initialize the parser params */
memset(¶ms, 0, sizeof(struct ulp_rte_parser_params));
params.ulp_ctx = ulp_ctx;
+ params.port_id = dev->data->port_id;
if (unlikely(bnxt_ulp_cntxt_app_id_get(params.ulp_ctx, ¶ms.app_id))) {
BNXT_DRV_DBG(ERR, "failed to get the app id\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c
index 6401a7a80f..f09d072ef3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c
@@ -181,6 +181,8 @@ ulp_allocator_tbl_list_alloc(struct bnxt_ulp_mapper_data *mapper_data,
BNXT_DRV_DBG(ERR, "unable to alloc index %x\n", idx);
return -ENOMEM;
}
+ /* Not using zero index */
+ *alloc_id += 1;
return 0;
}
@@ -210,6 +212,8 @@ ulp_allocator_tbl_list_free(struct bnxt_ulp_mapper_data *mapper_data,
BNXT_DRV_DBG(ERR, "invalid table index %x\n", idx);
return -EINVAL;
}
+ /* not using zero index */
+ index -= 1;
if (index < 0 || index > entry->num_entries) {
BNXT_DRV_DBG(ERR, "invalid alloc index %x\n", index);
return -EINVAL;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
index 17d2daeea3..b7a893a04f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -12,7 +12,7 @@
#include "ulp_port_db.h"
#include "ulp_flow_db.h"
#include "ulp_mapper.h"
-
+#include "ulp_rte_parser.h"
static void
ulp_l2_custom_tunnel_id_update(struct bnxt *bp,
struct bnxt_ulp_mapper_parms *params);
@@ -485,6 +485,24 @@ ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
return rc;
}
+static void
+bnxt_ulp_destroy_group_rules(struct bnxt *bp, uint16_t port_id)
+{
+ struct bnxt_ulp_grp_rule_info *info;
+ struct bnxt_ulp_grp_rule_info *grp_rules;
+ uint16_t idx;
+
+ grp_rules = bp->ulp_ctx->cfg_data->df_rule_info[port_id].grp_df_rule;
+
+ for (idx = 0; idx < BNXT_ULP_MAX_GROUP_CNT; idx++) {
+ info = &grp_rules[idx];
+ if (!info->valid)
+ continue;
+ ulp_default_flow_destroy(bp->eth_dev, info->flow_id);
+ memset(info, 0, sizeof(struct bnxt_ulp_grp_rule_info));
+ }
+}
+
void
bnxt_ulp_destroy_df_rules(struct bnxt *bp, bool global)
{
@@ -505,8 +523,14 @@ bnxt_ulp_destroy_df_rules(struct bnxt *bp, bool global)
if (!info->valid)
return;
+ /* Delete the group default rules */
+ bnxt_ulp_destroy_group_rules(bp, port_id);
+
ulp_default_flow_destroy(bp->eth_dev,
info->def_port_flow_id);
+ if (info->promisc_flow_id)
+ ulp_default_flow_destroy(bp->eth_dev,
+ info->promisc_flow_id);
memset(info, 0, sizeof(struct bnxt_ulp_df_rule_info));
return;
}
@@ -517,8 +541,14 @@ bnxt_ulp_destroy_df_rules(struct bnxt *bp, bool global)
if (!info->valid)
continue;
+ /* Delete the group default rules */
+ bnxt_ulp_destroy_group_rules(bp, port_id);
+
ulp_default_flow_destroy(bp->eth_dev,
info->def_port_flow_id);
+ if (info->promisc_flow_id)
+ ulp_default_flow_destroy(bp->eth_dev,
+ info->promisc_flow_id);
memset(info, 0, sizeof(struct bnxt_ulp_df_rule_info));
}
}
@@ -552,6 +582,7 @@ bnxt_create_port_app_df_rule(struct bnxt *bp, uint8_t flow_type,
int32_t
bnxt_ulp_create_df_rules(struct bnxt *bp)
{
+ struct rte_eth_dev *dev = bp->eth_dev;
struct bnxt_ulp_df_rule_info *info;
uint16_t port_id;
int rc = 0;
@@ -581,6 +612,9 @@ bnxt_ulp_create_df_rules(struct bnxt *bp)
if (rc || BNXT_TESTPMD_EN(bp))
bp->tx_cfa_action = 0;
+ /* set or reset the promiscuous rule */
+ bnxt_ulp_promisc_mode_set(bp, dev->data->promiscuous);
+
info->valid = true;
return 0;
}
@@ -709,3 +743,253 @@ ulp_l2_custom_tunnel_id_update(struct bnxt *bp,
ULP_WP_SYM_TUN_HDR_TYPE_UPAR2);
}
}
+
+/*
+ * Function to execute a specific template, this does not create flow id
+ *
+ * bp [in] Ptr to bnxt
+ * param_list [in] Ptr to a list of parameters (Currently, only DPDK port_id).
+ * ulp_class_tid [in] Class template ID number.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+static int32_t
+ulp_flow_template_process(struct bnxt *bp,
+ struct ulp_tlv_param *param_list,
+ uint32_t ulp_class_tid,
+ uint16_t port_id,
+ uint32_t flow_id)
+{
+ struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+ uint64_t comp_fld[BNXT_ULP_CF_IDX_LAST];
+ struct bnxt_ulp_mapper_parms mapper_params = { 0 };
+ struct ulp_rte_act_prop act_prop;
+ struct ulp_rte_act_bitmap act = { 0 };
+ struct bnxt_ulp_context *ulp_ctx;
+ uint32_t type;
+ int rc = 0;
+
+ memset(&mapper_params, 0, sizeof(mapper_params));
+ memset(hdr_field, 0, sizeof(hdr_field));
+ memset(comp_fld, 0, sizeof(comp_fld));
+ memset(&act_prop, 0, sizeof(act_prop));
+
+ mapper_params.hdr_field = hdr_field;
+ mapper_params.act_bitmap = &act;
+ mapper_params.act_prop = &act_prop;
+ mapper_params.comp_fld = comp_fld;
+ mapper_params.class_tid = ulp_class_tid;
+ mapper_params.port_id = port_id;
+
+ ulp_ctx = bp->ulp_ctx;
+ if (!ulp_ctx) {
+ BNXT_DRV_DBG(ERR,
+ "ULP is not init'ed. Fail to create dflt flow.\n");
+ return -EINVAL;
+ }
+
+ type = param_list->type;
+ while (type != BNXT_ULP_DF_PARAM_TYPE_LAST) {
+ if (ulp_def_handler_tbl[type].vfr_func) {
+ rc = ulp_def_handler_tbl[type].vfr_func(ulp_ctx,
+ param_list,
+ &mapper_params);
+ if (rc) {
+ BNXT_DRV_DBG(ERR,
+ "Failed to create default flow\n");
+ return rc;
+ }
+ }
+
+ param_list++;
+ type = param_list->type;
+ }
+ /* Protect flow creation */
+ if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+ BNXT_DRV_DBG(ERR, "Flow db lock acquire failed\n");
+ return -EINVAL;
+ }
+
+ mapper_params.flow_id = flow_id;
+ rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params,
+ NULL);
+ bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+ return rc;
+}
+
+int32_t
+bnxt_ulp_promisc_mode_set(struct bnxt *bp, uint8_t enable)
+{
+ uint32_t flow_type;
+ struct bnxt_ulp_df_rule_info *info;
+ uint16_t port_id;
+ int rc = 0;
+
+ if (!BNXT_TRUFLOW_EN(bp) || BNXT_ETH_DEV_IS_REPRESENTOR(bp->eth_dev) ||
+ !bp->ulp_ctx)
+ return rc;
+
+ if (!BNXT_CHIP_P5(bp))
+ return rc;
+
+ port_id = bp->eth_dev->data->port_id;
+ info = &bp->ulp_ctx->cfg_data->df_rule_info[port_id];
+
+ /* create the promiscuous rule */
+ if (enable && !info->promisc_flow_id) {
+ flow_type = BNXT_ULP_TEMPLATE_PROMISCUOUS_ENABLE;
+ rc = bnxt_create_port_app_df_rule(bp, flow_type,
+ &info->promisc_flow_id);
+ BNXT_DRV_DBG(DEBUG, "enable ulp promisc mode on port %u:%u\n",
+ port_id, info->promisc_flow_id);
+ } else if (!enable && info->promisc_flow_id) {
+ struct ulp_tlv_param param_list[] = {
+ {
+ .type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+ .length = 2,
+ .value = {(port_id >> 8) & 0xff, port_id & 0xff}
+ },
+ {
+ .type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+ .length = 0,
+ .value = {0}
+ }
+ };
+
+ flow_type = BNXT_ULP_TEMPLATE_PROMISCUOUS_DISABLE;
+ if (ulp_flow_template_process(bp, param_list, flow_type,
+ port_id, 0))
+ return -EIO;
+
+ rc = ulp_default_flow_destroy(bp->eth_dev,
+ info->promisc_flow_id);
+ BNXT_DRV_DBG(DEBUG, "disable ulp promisc mode on port %u:%u\n",
+ port_id, info->promisc_flow_id);
+ info->promisc_flow_id = 0;
+ }
+ return rc;
+}
+
+/* Function to create the rte flow for miss action. */
+int32_t
+bnxt_ulp_grp_miss_act_set(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action actions[],
+ uint32_t *flow_id)
+{
+ struct bnxt_ulp_mapper_parms mparms = { 0 };
+ struct ulp_rte_parser_params params;
+ struct bnxt_ulp_context *ulp_ctx;
+ int ret = BNXT_TF_RC_ERROR;
+ uint16_t func_id;
+ uint32_t fid;
+ uint32_t group_id;
+
+ ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+ if (unlikely(!ulp_ctx)) {
+ BNXT_DRV_DBG(ERR, "ULP context is not initialized\n");
+ goto flow_error;
+ }
+
+ /* Initialize the parser params */
+ memset(¶ms, 0, sizeof(struct ulp_rte_parser_params));
+ params.ulp_ctx = ulp_ctx;
+ params.port_id = dev->data->port_id;
+ /* classid is the group action template*/
+ params.class_id = BNXT_ULP_TEMPLATE_GROUP_MISS_ACTION;
+
+ if (unlikely(bnxt_ulp_cntxt_app_id_get(params.ulp_ctx, ¶ms.app_id))) {
+ BNXT_DRV_DBG(ERR, "failed to get the app id\n");
+ goto flow_error;
+ }
+
+ /* Set the flow attributes */
+ bnxt_ulp_set_dir_attributes(¶ms, attr);
+
+ if (unlikely(bnxt_ulp_set_prio_attribute(¶ms, attr)))
+ goto flow_error;
+
+ bnxt_ulp_init_parser_cf_defaults(¶ms, params.port_id);
+
+ /* Get the function id */
+ if (unlikely(ulp_port_db_port_func_id_get(ulp_ctx,
+ params.port_id,
+ &func_id))) {
+ BNXT_DRV_DBG(ERR, "conversion of port to func id failed\n");
+ goto flow_error;
+ }
+
+ /* Protect flow creation */
+ if (unlikely(bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx))) {
+ BNXT_DRV_DBG(ERR, "Flow db lock acquire failed\n");
+ goto flow_error;
+ }
+
+ /* Allocate a Flow ID for attaching all resources for the flow to.
+ * Once allocated, all errors have to walk the list of resources and
+ * free each of them.
+ */
+ ret = ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT,
+ func_id, &fid);
+ if (unlikely(ret)) {
+ BNXT_DRV_DBG(ERR, "Unable to allocate flow table entry\n");
+ goto release_lock;
+ }
+
+ /* Update the implied SVIF */
+ ulp_rte_parser_implicit_match_port_process(¶ms);
+
+ /* Parse the rte flow action */
+ ret = bnxt_ulp_rte_parser_act_parse(actions, ¶ms);
+ if (unlikely(ret != BNXT_TF_RC_SUCCESS))
+ goto free_fid;
+
+ /* Verify the jump target group id */
+ if (ULP_BITMAP_ISSET(params.act_bitmap.bits, BNXT_ULP_ACT_BIT_JUMP)) {
+ memcpy(&group_id,
+ ¶ms.act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_JUMP],
+ BNXT_ULP_ACT_PROP_SZ_JUMP);
+ if (rte_cpu_to_be_32(group_id) == attr->group) {
+ BNXT_DRV_DBG(ERR, "Jump action cannot jump to its own group.\n");
+ ret = BNXT_TF_RC_ERROR;
+ goto free_fid;
+ }
+ }
+
+ mparms.flow_id = fid;
+ mparms.func_id = func_id;
+ mparms.port_id = params.port_id;
+
+ /* Perform the rte flow post process */
+ bnxt_ulp_rte_parser_post_process(¶ms);
+
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG_PARSER
+ /* Dump the rte flow action */
+ ulp_parser_act_info_dump(¶ms);
+#endif
+#endif
+
+ ret = ulp_matcher_action_match(¶ms, ¶ms.act_tmpl);
+ if (unlikely(ret != BNXT_TF_RC_SUCCESS))
+ goto free_fid;
+
+ bnxt_ulp_init_mapper_params(&mparms, ¶ms,
+ BNXT_ULP_FDB_TYPE_DEFAULT);
+ /* Call the ulp mapper to create the flow in the hardware. */
+ ret = ulp_mapper_flow_create(ulp_ctx, &mparms, NULL);
+ if (unlikely(ret))
+ goto free_fid;
+
+ bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+
+ *flow_id = fid;
+ return 0;
+
+free_fid:
+ ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT, fid);
+release_lock:
+ bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+flow_error:
+ return ret;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index c595e7cfc3..721e8f4992 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -858,6 +858,37 @@ ulp_mapper_field_port_db_process(struct bnxt_ulp_mapper_parms *parms,
return 0;
}
+static int32_t
+ulp_mapper_field_port_db_write(struct bnxt_ulp_mapper_parms *parms,
+ uint32_t port_id,
+ uint16_t idx,
+ uint8_t *val,
+ uint32_t length)
+{
+ enum bnxt_ulp_port_table port_data = idx;
+ uint32_t val32;
+
+ switch (port_data) {
+ case BNXT_ULP_PORT_TABLE_PHY_PORT_MIRROR_ID:
+ if (ULP_BITS_2_BYTE(length) > sizeof(val32)) {
+ BNXT_DRV_DBG(ERR, "Invalid data length %u\n", length);
+ return -EINVAL;
+ }
+ memcpy(&val32, val, ULP_BITS_2_BYTE(length));
+ if (unlikely(ulp_port_db_port_table_mirror_set(parms->ulp_ctx,
+ port_id,
+ val32))) {
+ BNXT_DRV_DBG(ERR, "Invalid port id %u\n", port_id);
+ return -EINVAL;
+ }
+ break;
+ default:
+ BNXT_DRV_DBG(ERR, "Invalid port_data %d\n", port_data);
+ return -EINVAL;
+ }
+ return 0;
+}
+
static int32_t
ulp_mapper_field_src_process(struct bnxt_ulp_mapper_parms *parms,
enum bnxt_ulp_field_src field_src,
@@ -3569,6 +3600,10 @@ ulp_mapper_func_info_process(struct bnxt_ulp_mapper_parms *parms,
process_src1 = 1;
case BNXT_ULP_FUNC_OPC_COND_LIST:
break;
+ case BNXT_ULP_FUNC_OPC_PORT_TABLE:
+ process_src1 = 1;
+ process_src2 = 1;
+ break;
default:
break;
}
@@ -3680,6 +3715,12 @@ ulp_mapper_func_info_process(struct bnxt_ulp_mapper_parms *parms,
&res, sizeof(res)))
return -EINVAL;
break;
+ case BNXT_ULP_FUNC_OPC_PORT_TABLE:
+ rc = ulp_mapper_field_port_db_write(parms, res1,
+ func_info->func_dst_opr,
+ (uint8_t *)&res2,
+ func_info->func_oper_size);
+ return rc;
default:
BNXT_DRV_DBG(ERR, "invalid func code %u\n",
func_info->func_opc);
@@ -3842,7 +3883,7 @@ ulp_mapper_cond_execute_list_process(struct bnxt_ulp_mapper_parms *parms,
{
struct bnxt_ulp_mapper_cond_list_info *execute_info;
struct bnxt_ulp_mapper_cond_list_info *oper;
- int32_t cond_list_res, cond_res = 0, rc = 0;
+ int32_t cond_list_res = 0, cond_res = 0, rc = 0;
uint32_t idx;
/* set the execute result to true */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 384b89da46..6907771725 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -513,6 +513,49 @@ ulp_port_db_phy_port_svif_get(struct bnxt_ulp_context *ulp_ctxt,
return 0;
}
+/*
+ * Api to get the socket direct svif for a given device port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] device port id
+ * svif [out] the socket direct svif of the given device index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_dev_port_socket_direct_svif_get(struct bnxt_ulp_context *ulp_ctxt,
+ uint32_t port_id,
+ uint16_t *svif)
+{
+ struct bnxt_ulp_port_db *port_db;
+ uint32_t ifindex;
+ uint16_t phy_port_id, func_id;
+
+ port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+
+ if (!port_db || port_id >= RTE_MAX_ETHPORTS) {
+ BNXT_DRV_DBG(ERR, "Invalid Arguments\n");
+ return -EINVAL;
+ }
+ if (!port_db->dev_port_list[port_id])
+ return -ENOENT;
+
+ /* Get physical port id */
+ ifindex = port_db->dev_port_list[port_id];
+ func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+ phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
+
+ /* Calculate physical port id for socket direct port */
+ phy_port_id = phy_port_id ? 0 : 1;
+ if (phy_port_id >= port_db->phy_port_cnt) {
+ BNXT_DRV_DBG(ERR, "Invalid Arguments\n");
+ return -EINVAL;
+ }
+
+ *svif = port_db->phy_port_list[phy_port_id].port_svif;
+ return 0;
+}
+
/*
* Api to get the port type for a given ulp ifindex.
*
@@ -812,3 +855,49 @@ ulp_port_db_port_table_scope_get(struct bnxt_ulp_context *ulp_ctxt,
}
return -EINVAL;
}
+
+/* Api to get the PF Mirror Id for a given port id
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] dpdk port id
+ * mirror id [in] mirror id
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_port_table_mirror_set(struct bnxt_ulp_context *ulp_ctxt,
+ uint16_t port_id, uint32_t mirror_id)
+{
+ struct ulp_phy_port_info *port_data;
+ struct bnxt_ulp_port_db *port_db;
+ struct ulp_interface_info *intf;
+ struct ulp_func_if_info *func;
+ uint32_t ifindex;
+
+ port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+ if (!port_db) {
+ BNXT_DRV_DBG(ERR, "Invalid Arguments\n");
+ return -EINVAL;
+ }
+
+ if (ulp_port_db_dev_port_to_ulp_index(ulp_ctxt, port_id, &ifindex)) {
+ BNXT_DRV_DBG(ERR, "Invalid port id %u\n", port_id);
+ return -EINVAL;
+ }
+
+ intf = &port_db->ulp_intf_list[ifindex];
+ func = &port_db->ulp_func_id_tbl[intf->drv_func_id];
+ if (!func->func_valid) {
+ BNXT_DRV_DBG(ERR, "Invalid func for port id %u\n", port_id);
+ return -EINVAL;
+ }
+
+ port_data = &port_db->phy_port_list[func->phy_port_id];
+ if (!port_data->port_valid) {
+ BNXT_DRV_DBG(ERR, "Invalid phy port\n");
+ return -EINVAL;
+ }
+
+ port_data->port_mirror_id = mirror_id;
+ return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index ef164f1e9b..8a2c08fe67 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -70,6 +70,7 @@ struct ulp_phy_port_info {
uint16_t port_spif;
uint16_t port_parif;
uint16_t port_vport;
+ uint32_t port_mirror_id;
};
/* Structure for the Port database */
@@ -240,6 +241,20 @@ ulp_port_db_phy_port_svif_get(struct bnxt_ulp_context *ulp_ctxt,
uint32_t phy_port,
uint16_t *svif);
+/*
+ * Api to get the socket direct svif for a given device port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] device port id
+ * svif [out] the socket direct svif of the given device index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_dev_port_socket_direct_svif_get(struct bnxt_ulp_context *ulp_ctxt,
+ uint32_t port_id,
+ uint16_t *svif);
+
/*
* Api to get the port type for a given ulp ifindex.
*
@@ -379,4 +394,17 @@ ulp_port_db_port_vf_fid_get(struct bnxt_ulp_context *ulp_ctxt,
int32_t
ulp_port_db_port_table_scope_get(struct bnxt_ulp_context *ulp_ctxt,
uint16_t port_id, uint8_t **tsid);
+
+/* Api to get the PF Mirror Id for a given port id
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] dpdk port id
+ * mirror id [in] mirror id
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_port_table_mirror_set(struct bnxt_ulp_context *ulp_ctxt,
+ uint16_t port_id, uint32_t mirror_id);
+
#endif /* _ULP_PORT_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index dbd8a118df..dd5985cd7b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -307,6 +307,14 @@ bnxt_ulp_comp_fld_intf_update(struct ulp_rte_parser_params *params)
BNXT_ULP_CF_IDX_VF_FUNC_PARIF,
parif);
+ /* Set VF func SVIF */
+ if (ulp_port_db_svif_get(params->ulp_ctx, ifindex,
+ BNXT_ULP_CF_IDX_VF_FUNC_SVIF, &svif)) {
+ BNXT_DRV_DBG(ERR, "ParseErr:ifindex is not valid\n");
+ return;
+ }
+ ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_VF_FUNC_SVIF,
+ svif);
} else {
/* Set DRV func PARIF */
if (ulp_port_db_parif_get(params->ulp_ctx, ifindex,
@@ -319,6 +327,15 @@ bnxt_ulp_comp_fld_intf_update(struct ulp_rte_parser_params *params)
ULP_COMP_FLD_IDX_WR(params,
BNXT_ULP_CF_IDX_DRV_FUNC_PARIF,
parif);
+
+ /* Set DRV SVIF */
+ if (ulp_port_db_svif_get(params->ulp_ctx, ifindex,
+ BNXT_ULP_DRV_FUNC_SVIF, &svif)) {
+ BNXT_DRV_DBG(ERR, "ParseErr:ifindex is not valid\n");
+ return;
+ }
+ ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_DRV_FUNC_SVIF,
+ svif);
}
if (mtype == BNXT_ULP_INTF_TYPE_PF) {
ULP_COMP_FLD_IDX_WR(params,
--
2.39.3